Tag Archives: AI

NYU Langone Health / NYU School of Medicine study: Artificial intelligence can diagnose PTSD by analyzing voices

23 Apr

Live Science described AI in What Is Artificial Intelligence?:

One of the standard textbooks in the field, by University of California computer scientist Stuart Russell and Google’s director of research, Peter Norvig, puts artificial intelligence into four broad categories:
• machines that think like humans,
• machines that act like humans,
• machines that think rationally,
• machines that act rationally.
The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn’t necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks. Similarly, a machine that acts like a human doesn’t necessarily bear much resemblance to people in the way it processes information.
Even IBM’s Watson, which acted somewhat like a human when playing Jeopardy, wasn’t using anything like the rational processes humans use.
Tough tasks
Davis says he uses another definition, centered on what one wants a computer to do. “There are a number of cognitive tasks that people do easily — often, indeed, with no conscious thought at all — but that are extremely hard to program on computers. Archetypal examples are vision and natural language understanding. Artificial intelligence, as I define it, is the study of getting computers to carry out these tasks,” he said….
Computer vision has made a lot of strides in the past decade — cameras can now recognize faces. Other tasks, though, are proving tougher. For example, Davis and NYU psychology professor Gary Marcus wrote in the Communications of the Association for Computing Machinery about “common sense” tasks that computers find very difficult. A robot serving drinks, for example, can be programmed to recognize a request for one, and even to manipulate a glass and pour one. But if a fly lands in the glass, the computer still has a tough time deciding whether to pour the drink and serve it (or not).
Common sense
The issue is that much of “common sense” is very hard to model. Computer scientists have taken several approaches to get around that problem. IBM’s Watson, for instance, was able to do so well on Jeopardy! because it had a huge database of knowledge to work with and a few rules to string words together to make questions and answers. Watson, though, would have a difficult time with a simple open-ended conversation.
Beyond tasks, though, is the issue of learning. Machines can learn, said Kathleen McKeown, a professor of computer science at Columbia University. “Machine learning is a kind of AI,” she said.
Some machine learning works in a way similar to the way people do it, she noted. Google Translate, for example, uses a large corpus of text in a given language to translate to another language, a statistical process that doesn’t involve looking for the “meaning” of words. Humans, she said, do something similar, in that we learn languages by seeing lots of examples.
That said, Google Translate doesn’t always get it right, precisely because it doesn’t seek meaning and can sometimes be fooled by synonyms or differing connotations….
The upshot is that AIs exist that can handle certain tasks well, as do AIs that look almost human because they have a large trove of data to work with. Computer scientists have been less successful coming up with an AI that can think the way we expect a human being to, or act like a human in more than very limited situations…. https://www.livescience.com/55089-artificial-intelligence.html

NYU scientists used AI to help diagnose post-traumatic stress disorder (PTSD).

The National Institute of Mental Health defined PTSD:

Post-Traumatic Stress Disorder
Overview
PTSD is a disorder that develops in some people who have experienced a shocking, scary, or dangerous event.
It is natural to feel afraid during and after a traumatic situation. Fear triggers many split-second changes in the body to help defend against danger or to avoid it. This “fight-or-flight” response is a typical reaction meant to protect a person from harm. Nearly everyone will experience a range of reactions after trauma, yet most people recover from initial symptoms naturally. Those who continue to experience problems may be diagnosed with PTSD. People who have PTSD may feel stressed or frightened even when they are not in danger.
Signs and Symptoms
Not every traumatized person develops ongoing (chronic) or even short-term (acute) PTSD. Not everyone with PTSD has been through a dangerous event. Some experiences, like the sudden, unexpected death of a loved one, can also cause PTSD. Symptoms usually begin early, within 3 months of the traumatic incident, but sometimes they begin years afterward. Symptoms must last more than a month and be severe enough to interfere with relationships or work to be considered PTSD. The course of the illness varies. Some people recover within 6 months, while others have symptoms that last much longer. In some people, the condition becomes chronic.
A doctor who has experience helping people with mental illnesses, such as a psychiatrist or psychologist, can diagnose PTSD.
To be diagnosed with PTSD, an adult must have all of the following for at least 1 month:
• At least one re-experiencing symptom
• At least one avoidance symptom
• At least two arousal and reactivity symptoms
• At least two cognition and mood symptoms
Re-experiencing symptoms include:
• Flashbacks—reliving the trauma over and over, including physical symptoms like a racing heart or sweating
• Bad dreams
• Frightening thoughts
Re-experiencing symptoms may cause problems in a person’s everyday routine. The symptoms can start from the person’s own thoughts and feelings. Words, objects, or situations that are reminders of the event can also trigger re-experiencing symptoms.
Avoidance symptoms include:
• Staying away from places, events, or objects that are reminders of the traumatic experience
• Avoiding thoughts or feelings related to the traumatic event
Things that remind a person of the traumatic event can trigger avoidance symptoms. These symptoms may cause a person to change his or her personal routine. For example, after a bad car accident, a person who usually drives may avoid driving or riding in a car.
Arousal and reactivity symptoms include:
• Being easily startled
• Feeling tense or “on edge”
• Having difficulty sleeping
• Having angry outbursts
Arousal symptoms are usually constant, instead of being triggered by things that remind one of the traumatic events. These symptoms can make the person feel stressed and angry. They may make it hard to do daily tasks, such as sleeping, eating, or concentrating.
Cognition and mood symptoms include:
• Trouble remembering key features of the traumatic event
• Negative thoughts about oneself or the world
• Distorted feelings like guilt or blame
• Loss of interest in enjoyable activities
Cognition and mood symptoms can begin or worsen after the traumatic event, but are not due to injury or substance use. These symptoms can make the person feel alienated or detached from friends or family members.
It is natural to have some of these symptoms after a dangerous event. Sometimes people have very serious symptoms that go away after a few weeks. This is called acute stress disorder, or ASD. When the symptoms last more than a month, seriously affect one’s ability to function, and are not due to substance use, medical illness, or anything except the event itself, they might be PTSD. Some people with PTSD don’t show any symptoms for weeks or months. PTSD is often accompanied by depression, substance abuse, or one or more of the other anxiety disorders….
https://www.nimh.nih.gov/health/topics/post-traumatic-stress-disorder-ptsd/index.shtml

See Recognizing PTSD Early Warning Signs by Matthew Tull, PhD: https://www.verywellmind.com/recognizing-ptsd-early-warning-signs-2797569

Science Daily reported in Artificial intelligence can diagnose PTSD by analyzing voices:

A specially designed computer program can help diagnose post-traumatic stress disorder (PTSD) in veterans by analyzing their voices, a new study finds.
Published online April 22 in the journal Depression and Anxiety, the study found that an artificial intelligence tool can distinguish — with 89 percent accuracy — between the voices of those with or without PTSD.
“Our findings suggest that speech-based characteristics can be used to diagnose this disease, and with further refinement and validation, may be employed in the clinic in the near future,” says senior study author Charles R. Marmar, MD, the Lucius N. Littauer Professor and chair of the Department of Psychiatry at NYU School of Medicine.
More than 70 percent of adults worldwide experience a traumatic event at some point in their lives, with up to 12 percent of people in some struggling countries suffering from PTSD. Those with the condition experience strong, persistent distress when reminded of a triggering event.
The study authors say that a PTSD diagnosis is most often determined by clinical interview or a self-report assessment, both inherently prone to biases. This has led to efforts to develop objective, measurable, physical markers of PTSD progression, much like laboratory values for medical conditions, but progress has been slow.
Learning How to Learn
In the current study, the research team used a statistical/machine learning technique, called random forests, that has the ability to “learn” how to classify individuals based on examples. Such AI programs build “decision” rules and mathematical models that enable decision-making with increasing accuracy as the amount of training data grows.
The researchers first recorded standard, hours-long diagnostic interviews, called Clinician-Administered PTSD Scale, or CAPS, of 53 Iraq and Afghanistan veterans with military-service-related PTSD, as well as those of 78 veterans without the disease. The recordings were then fed into voice software from SRI International — the institute that also invented Siri — to yield a total of 40,526 speech-based features captured in short spurts of talk, which the team’s AI program sifted through for patterns.
The random forest program linked patterns of specific voice features with PTSD, including less clear speech and a lifeless, metallic tone, both of which had long been reported anecdotally as helpful in diagnosis. While the current study did not explore the disease mechanisms behind PTSD, the theory is that traumatic events change brain circuits that process emotion and muscle tone, which affects a person’s voice.
Moving forward, the research team plans to train the AI voice tool with more data, further validate it on an independent sample, and apply for government approval to use the tool clinically.
“Speech is an attractive candidate for use in an automated diagnostic system, perhaps as part of a future PTSD smartphone app, because it can be measured cheaply, remotely, and non-intrusively,” says lead author Adam Brown, PhD, adjunct assistant professor in the Department of Psychiatry at NYU School of Medicine.
“The speech analysis technology used in the current study on PTSD detection falls into the range of capabilities included in our speech analytics platform called SenSay Analytics™,” says Dimitra Vergyri, director of SRI International’s Speech Technology and Research (STAR) Laboratory. “The software analyzes words — in combination with frequency, rhythm, tone, and articulatory characteristics of speech — to infer the state of the speaker, including emotion, sentiment, cognition, health, mental health and communication quality. The technology has been involved in a series of industry applications visible in startups like Oto, Ambit and Decoded Health.” https://www.sciencedaily.com/releases/2019/04/190422082232.htm
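For readers curious about the method, a random forest classifier of the kind the study describes can be sketched with scikit-learn. Everything here is a stand-in: random numbers replace the 40,526 SRI-derived speech features, the labels are synthetic, and the hyperparameters are arbitrary, so this shows the workflow, not the study’s result:

```python
# Sketch: random forest classification of per-speaker speech features.
# Data are synthetic stand-ins for the study's acoustic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_speakers, n_features = 131, 64          # 131 veterans in the study; 64 toy features
X = rng.normal(size=(n_speakers, n_features))   # stand-in speech features
y = rng.integers(0, 2, size=n_speakers)         # stand-in PTSD labels

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(scores.mean())   # on random data this hovers near chance (AUC ~0.5)
```

On real features, feature selection would follow — the published classifier kept only 18 of the candidate features.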

Citation:

Artificial intelligence can diagnose PTSD by analyzing voices
Study tests potential telemedicine approach
Date: April 22, 2019
Source: NYU Langone Health / NYU School of Medicine
Summary:
A specially designed computer program can help to diagnose post-traumatic stress disorder (PTSD) in veterans by analyzing their voices.

Speech‐based markers for posttraumatic stress disorder in US veterans
First published: 22 April 2019
https://doi.org/10.1002/da.22890
Preliminary findings from this study were presented at the 16th annual conference of the International Speech Communication Association, Dresden, Germany, September 6–10, 2015.
Charles R. Marmar
Corresponding Author
E-mail address: Charles.Marmar@nyulangone.org
http://orcid.org/0000-0001-8427-5607
Department of Psychiatry, New York University School of Medicine, New York, New York
Steven and Alexandra Cohen Veterans Center for the Study of Post‐Traumatic Stress and Traumatic Brain Injury, New York, New York
Marmar and Brown should be considered joint first authors.
Correspondence Charles R. Marmar, M.D., Department of Psychiatry, New York University School of Medicine, 1 Park Avenue, New York, NY 10016. Email: Charles.Marmar@nyulangone.org
Background
The diagnosis of posttraumatic stress disorder (PTSD) is usually based on clinical interviews or self‐report measures. Both approaches are subject to under‐ and over‐reporting of symptoms. An objective test is lacking. We have developed a classifier of PTSD based on objective speech‐marker features that discriminate PTSD cases from controls.
Methods
Speech samples were obtained from warzone‐exposed veterans, 52 cases with PTSD and 77 controls, assessed with the Clinician‐Administered PTSD Scale. Individuals with major depressive disorder (MDD) were excluded. Audio recordings of clinical interviews were used to obtain 40,526 speech features which were input to a random forest (RF) algorithm.
Results
The selected RF used 18 speech features and the receiver operating characteristic curve had an area under the curve (AUC) of 0.954. At a probability of PTSD cut point of 0.423, Youden’s index was 0.787, and overall correct classification rate was 89.1%. The probability of PTSD was higher for markers that indicated slower, more monotonous speech, less change in tonality, and less activation. Depression symptoms, alcohol use disorder, and TBI did not meet statistical tests to be considered confounders.
Conclusions
This study demonstrates that a speech‐based algorithm can objectively differentiate PTSD cases from controls. The RF classifier had a high AUC. Further validation in an independent sample and appraisal of the classifier to identify those with MDD only compared with those with PTSD comorbid with MDD is required.
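The metrics in the Results — the ROC AUC, Youden’s index, and a probability cut point — can be reproduced on toy numbers with scikit-learn. The labels and predicted probabilities below are invented for illustration:

```python
# Computing AUC and Youden's J (tpr - fpr) on toy scores, and picking
# the threshold that maximizes J, as in the abstract's Results.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

y_true = np.array([0, 0, 0, 1, 1, 1, 0, 1])                     # toy labels
p_ptsd = np.array([0.1, 0.3, 0.45, 0.8, 0.7, 0.9, 0.2, 0.35])  # toy probabilities

auc = roc_auc_score(y_true, p_ptsd)
fpr, tpr, thresholds = roc_curve(y_true, p_ptsd)
j = tpr - fpr                       # Youden's J at each threshold
best = np.argmax(j)
print(auc, j[best], thresholds[best])   # AUC, maximum J, the cut point
```

The study’s reported values (AUC 0.954, J 0.787 at a cut point of 0.423) were computed this same general way on the real classifier’s outputs.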

Here is the press release from NYU:

NEWS RELEASE 22-APR-2019
Artificial intelligence can diagnose PTSD by analyzing voices
Study tests potential telemedicine approach
NYU LANGONE HEALTH / NYU SCHOOL OF MEDICINE
VIDEO: NYU School of Medicine researchers say artificial intelligence could be used to diagnose PTSD by analyzing voices.
Credit: NYU School of Medicine
A specially designed computer program can help diagnose post-traumatic stress disorder (PTSD) in veterans by analyzing their voices, a new study finds.
Published online April 22 in the journal Depression and Anxiety, the study found that an artificial intelligence tool can distinguish – with 89 percent accuracy – between the voices of those with or without PTSD.
“Our findings suggest that speech-based characteristics can be used to diagnose this disease, and with further refinement and validation, may be employed in the clinic in the near future,” says senior study author Charles R. Marmar, MD, the Lucius N. Littauer Professor and chair of the Department of Psychiatry at NYU School of Medicine.
More than 70 percent of adults worldwide experience a traumatic event at some point in their lives, with up to 12 percent of people in some struggling countries suffering from PTSD. Those with the condition experience strong, persistent distress when reminded of a triggering event.
The study authors say that a PTSD diagnosis is most often determined by clinical interview or a self-report assessment, both inherently prone to biases. This has led to efforts to develop objective, measurable, physical markers of PTSD progression, much like laboratory values for medical conditions, but progress has been slow.
Learning How to Learn
In the current study, the research team used a statistical/machine learning technique, called random forests, that has the ability to “learn” how to classify individuals based on examples. Such AI programs build “decision” rules and mathematical models that enable decision-making with increasing accuracy as the amount of training data grows.
The researchers first recorded standard, hours-long diagnostic interviews, called Clinician-Administered PTSD Scale, or CAPS, of 53 Iraq and Afghanistan veterans with military-service-related PTSD, as well as those of 78 veterans without the disease. The recordings were then fed into voice software from SRI International – the institute that also invented Siri – to yield a total of 40,526 speech-based features captured in short spurts of talk, which the team’s AI program sifted through for patterns.
The random forest program linked patterns of specific voice features with PTSD, including less clear speech and a lifeless, metallic tone, both of which had long been reported anecdotally as helpful in diagnosis. While the current study did not explore the disease mechanisms behind PTSD, the theory is that traumatic events change brain circuits that process emotion and muscle tone, which affects a person’s voice.
Moving forward, the research team plans to train the AI voice tool with more data, further validate it on an independent sample, and apply for government approval to use the tool clinically.
“Speech is an attractive candidate for use in an automated diagnostic system, perhaps as part of a future PTSD smartphone app, because it can be measured cheaply, remotely, and non-intrusively,” says lead author Adam Brown, PhD, adjunct assistant professor in the Department of Psychiatry at NYU School of Medicine.
“The speech analysis technology used in the current study on PTSD detection falls into the range of capabilities included in our speech analytics platform called SenSay Analytics™,” says Dimitra Vergyri, director of SRI International’s Speech Technology and Research (STAR) Laboratory. “The software analyzes words – in combination with frequency, rhythm, tone, and articulatory characteristics of speech – to infer the state of the speaker, including emotion, sentiment, cognition, health, mental health and communication quality. The technology has been involved in a series of industry applications visible in startups like Oto, Ambit and Decoded Health.”
###
Along with Marmar and Brown, authors of the study from the Department of Psychiatry were Meng Qian, Eugene Laska, Carole Siegel, Meng Li, and Duna Abu-Amara. Study authors from SRI International were Andreas Tsiartas, Dimitra Vergyri, Colleen Richey, Jennifer Smith, and Bruce Knoth. Brown is also an associate professor of psychology at the New School for Social Research.
The study was supported by the U.S. Army Medical Research & Acquisition Activity (USAMRAA) and Telemedicine & Advanced Technology Research Center (TATRC) grant W81XWH-11-C-0004, as well as by the Steven and Alexandra Cohen Foundation.
Media Inquiries:
Jim Mandler
(212) 404-3500
jim.mandler@nyulangone.org
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

Resources:

Artificial Intelligence Will Redesign Healthcare https://medicalfuturist.com/artificial-intelligence-will-redesign-healthcare

9 Ways Artificial Intelligence is Affecting the Medical Field https://www.healthcentral.com/slideshow/8-ways-artificial-intelligence-is-affecting-the-medical-field#slide=2

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/

University of California Berkeley study: Artificial intelligence advances threaten privacy of health data

6 Jan

Joseph Jerome, CIPP/US wrote in the 2016 article, Why artificial intelligence may be the next big privacy trend:

What that looks like will vary, but it is likely that the same far-reaching and broad worries about fairness and accountability that have dogged every discussion about big data — and informed the FTC’s January Big Data Report — will present serious concerns for certain applications of AI. While “Preparing for the Future of Artificial Intelligence” is largely an exercise in stage-setting, the report is likely a harbinger of the same type of attention and focus that emerged within the advocacy community in the wake of the White House’s 2014 Big Data Report. For the privacy profession, the report hints at a few areas where our attention ought to be directed.
First, AI is still a nascent, immature field of engineering, and promoting that maturation process will involve a variety of different training and capacity-building efforts. The report explicitly recommends that ethical training, as well as training in security, privacy, and safety, should become an integral part of the curricula on AI, machine learning, and computer and data science at universities. Moving forward, one could imagine that ethical and other non-technical training will also be an important component of our STEM policies at large. Beyond formal education, however, building awareness among actual AI practitioners and developers will be essential to mitigate disconcerting or unintended behaviors, and to bolster public confidence in the application of artificial intelligence. Policymakers, federal agencies and civil society will need more in-house technical expertise to become more conversant on the current capabilities of artificial intelligence.
Second, while transparency is generally trotted out as the best of disinfectants, balancing transparency in the realm of AI will be a tremendous challenge for both competitive reasons and the “black box” nature of what we’re dealing with. While the majority of basic AI research is currently conducted by academics and commercial labs that collaborate to announce and publish their findings, the report ominously notes that competitive instincts could drive commercial labs towards increased secrecy, inhibiting the ability to monitor the progress of AI development and raising public concerns. But even if we can continue to promote transparency in the development of AI, it may be difficult for anyone whether they be auditors, consumers, or regulators to understand, predict, or explain the behaviors of more sophisticated AI systems.
The alternative appears to be bolstering accountability frameworks, but what exactly that looks like in this context is anyone’s guess. The report largely places its hopes on finding technical solutions to address accountability with respect to AI, and an IEEE effort on autonomous systems that I’ve been involved with has faced a similar roadblock. But if we have to rely on technical tools to put good intentions into practice, we will need more discussion about what those tools will be and how industry and individuals alike will be able to use them.
The Sky(net) isn’t falling, but… https://iapp.org/news/a/why-artificial-intelligence-may-be-the-next-big-privacy-trend/

A University of California Berkeley study reported that there could be problems with privacy when AI is applied to health data.

Science Daily reported in Artificial intelligence advances threaten privacy of health data:

Led by UC Berkeley engineer Anil Aswani, the study suggests current laws and regulations are nowhere near sufficient to keep an individual’s health status private in the face of AI development. The research was published Dec. 21 in the JAMA Network Open journal.
The findings show that by using artificial intelligence, it is possible to identify individuals by learning daily patterns in step data, such as that collected by activity trackers, smartwatches and smartphones, and correlating it to demographic data.
The mining of two years’ worth of data covering more than 15,000 Americans led to the conclusion that the privacy standards associated with 1996’s HIPAA (Health Insurance Portability and Accountability Act) legislation need to be revisited and reworked.
“We wanted to use NHANES (the National Health and Nutrition Examination Survey) to look at privacy questions because this data is representative of the diverse population in the U.S.,” said Aswani. “The results point out a major problem. If you strip all the identifying information, it doesn’t protect you as much as you’d think. Someone else can come back and put it all back together if they have the right kind of information.”
“In principle, you could imagine Facebook gathering step data from the app on your smartphone, then buying health care data from another company and matching the two,” he added. “Now they would have health care data that’s matched to names, and they could either start selling advertising based on that or they could sell the data to others.”
According to Aswani, the problem isn’t with the devices, but with how the information the devices capture can be misused and potentially sold on the open market.
“I’m not saying we should abandon these devices,” he said. “But we need to be very careful about how we are using this data. We need to protect the information. If we can do that, it’s a net positive.”
Though the study specifically looked at step data, the results suggest a broader threat to the privacy of health data…. https://www.sciencedaily.com/releases/2019/01/190103152906.htm

Citation:

Artificial intelligence advances threaten privacy of health data
Study finds current laws and regulations do not safeguard individuals’ confidential health information
Date: January 3, 2019
Source: University of California – Berkeley
Summary:
Advances in artificial intelligence, including activity trackers, smartphones and smartwatches, threaten the privacy of people’s health data, according to new research.

Journal Reference:
Liangyuan Na, Cong Yang, Chi-Cheng Lo, Fangyuan Zhao, Yoshimi Fukuoka, Anil Aswani. Feasibility of Reidentifying Individuals in Large National Physical Activity Data Sets From Which Protected Health Information Has Been Removed With Use of Machine Learning. JAMA Network Open, 2018; 1 (8): e186040 DOI: 10.1001/jamanetworkopen.2018.6040

Here is a portion of the JAMA abstract:

Original Investigation
Health Policy
December 21, 2018
Feasibility of Reidentifying Individuals in Large National Physical Activity Data Sets From Which Protected Health Information Has Been Removed With Use of Machine Learning
Liangyuan Na, BA1; Cong Yang, BS2; Chi-Cheng Lo, BS2; Fangyuan Zhao, BS3; Yoshimi Fukuoka, PhD, RN4; Anil Aswani, PhD2
Author Affiliations Article Information
JAMA Netw Open. 2018;1(8):e186040. doi:10.1001/jamanetworkopen.2018.6040
Key Points
Question Is it possible to reidentify physical activity data that have had protected health information removed by using machine learning?
Findings This cross-sectional study used national physical activity data from 14 451 individuals from the National Health and Nutrition Examination Surveys 2003-2004 and 2005-2006. Linear support vector machine and random forests reidentified the 20-minute-level physical activity data of approximately 80% of children and 95% of adults.
Meaning The findings of this study suggest that current practices for deidentifying physical activity data are insufficient for privacy and that deidentification should aggregate the physical activity data of many people to ensure individuals’ privacy.
Abstract
Importance Despite data aggregation and removal of protected health information, there is concern that deidentified physical activity (PA) data collected from wearable devices can be reidentified. Organizations collecting or distributing such data suggest that the aforementioned measures are sufficient to ensure privacy. However, no studies, to our knowledge, have been published that demonstrate the possibility or impossibility of reidentifying such activity data.
Objective To evaluate the feasibility of reidentifying accelerometer-measured PA data, which have had geographic and protected health information removed, using support vector machines (SVMs) and random forest methods from machine learning.
Design, Setting, and Participants In this cross-sectional study, the National Health and Nutrition Examination Survey (NHANES) 2003-2004 and 2005-2006 data sets were analyzed in 2018. The accelerometer-measured PA data were collected in a free-living setting for 7 continuous days. NHANES uses a multistage probability sampling design to select a sample that is representative of the civilian noninstitutionalized household (both adult and children) population of the United States.
Exposures The NHANES data sets contain objectively measured movement intensity as recorded by accelerometers worn during all walking for 1 week.
Main Outcomes and Measures The primary outcome was the ability of the random forest and linear SVM algorithms to match demographic and 20-minute aggregated PA data to individual-specific record numbers, and the percentage of correct matches by each machine learning algorithm was the measure…. https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2719130?resultClick=3
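The reidentification attack the abstract describes amounts to record linkage: matching a “deidentified” activity record back to an identified one. A minimal sketch on synthetic data, using a simple nearest-neighbor rule in place of the study’s SVM and random forest learners, looks like this:

```python
# Toy reidentification: "released" activity vectors (names stripped,
# slight noise) are re-linked to identified originals by distance.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_bins = 100, 72            # 72 twenty-minute bins = one day
activity = rng.poisson(lam=50, size=(n_people, n_bins)).astype(float)

# "Deidentified" release: identifiers dropped, a little noise added.
released = activity + rng.normal(scale=2.0, size=activity.shape)

# An attacker holding the original, identified records matches each
# released row to its nearest original row.
d = ((released[:, None, :] - activity[None, :, :]) ** 2).sum(axis=2)
matches = d.argmin(axis=1)
reidentified = (matches == np.arange(n_people)).mean()
print(reidentified)    # fraction of records correctly re-linked
```

Because daily activity patterns are highly individual, even a crude matcher re-links nearly everyone here — which is the abstract’s point that stripping identifiers alone is not sufficient deidentification.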

Here is the press release from UC Berkeley:

PUBLIC RELEASE: 3-JAN-2019
Artificial intelligence advances threaten privacy of health data
Study finds current laws and regulations do not safeguard individuals’ confidential health information
Advances in artificial intelligence have created new threats to the privacy of people’s health data, a new University of California, Berkeley, study shows.
Led by UC Berkeley engineer Anil Aswani, the study suggests current laws and regulations are nowhere near sufficient to keep an individual’s health status private in the face of AI development. The research was published Dec. 21 in the JAMA Network Open journal.
The findings show that by using artificial intelligence, it is possible to identify individuals by learning daily patterns in step data, such as that collected by activity trackers, smartwatches and smartphones, and correlating it to demographic data.
The mining of two years’ worth of data covering more than 15,000 Americans led to the conclusion that the privacy standards associated with 1996’s HIPAA (Health Insurance Portability and Accountability Act) legislation need to be revisited and reworked.
“We wanted to use NHANES (the National Health and Nutrition Examination Survey) to look at privacy questions because this data is representative of the diverse population in the U.S.,” said Aswani. “The results point out a major problem. If you strip all the identifying information, it doesn’t protect you as much as you’d think. Someone else can come back and put it all back together if they have the right kind of information.”
“In principle, you could imagine Facebook gathering step data from the app on your smartphone, then buying health care data from another company and matching the two,” he added. “Now they would have health care data that’s matched to names, and they could either start selling advertising based on that or they could sell the data to others.”
According to Aswani, the problem isn’t with the devices, but with how the information the devices capture can be misused and potentially sold on the open market.
“I’m not saying we should abandon these devices,” he said. “But we need to be very careful about how we are using this data. We need to protect the information. If we can do that, it’s a net positive.”
Though the study specifically looked at step data, the results suggest a broader threat to the privacy of health data.
“HIPAA regulations make your health care private, but they don’t cover as much as you think,” Aswani said. “Many groups, like tech companies, are not covered by HIPAA, and only very specific pieces of information are not allowed to be shared by current HIPAA rules. There are companies buying health data. It’s supposed to be anonymous data, but their whole business model is to find a way to attach names to this data and sell it.”
Aswani said that as advances in AI make it easier for companies to gain access to health data, the temptation for companies to use it in illegal or unethical ways will increase. Employers, mortgage lenders, credit card companies and others could potentially use AI to discriminate based on pregnancy or disability status, for instance.
“Ideally, what I’d like to see from this are new regulations or rules that protect health data,” he said. “But there is actually a big push to even weaken the regulations right now. For instance, the rule-making group for HIPAA has requested comments on increasing data sharing. The risk is that if people are not aware of what’s happening, the rules we have will be weakened. And the fact is the risks of us losing control of our privacy when it comes to health care are actually increasing and not decreasing.”
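The linkage attack Aswani describes can be illustrated with a short Python sketch. Everything below is hypothetical toy data — the study itself matched NHANES activity records using random forests and linear support vector machines — but the mechanic is the same: an attacker with an independently collected, named copy of someone's step pattern searches the "anonymized" release for the closest match.

```python
import random

random.seed(0)

# Hypothetical toy population: each person has a characteristic daily
# step pattern (mean steps per 20-minute bin over a day = 72 bins).
people = [f"person_{i}" for i in range(50)]
profiles = {p: [random.gauss(300, 80) for _ in range(72)] for p in people}

def noisy(profile, sd=30):
    """An independently collected measurement of the same person."""
    return [x + random.gauss(0, sd) for x in profile]

# "De-identified" health release: step records with names stripped.
release = [(record_id, noisy(profiles[p])) for record_id, p in enumerate(people)]

# Attacker's named data, e.g. gathered from a fitness app.
named = {p: noisy(profiles[p]) for p in people}

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# Nearest-neighbour matching stands in for the study's classifiers.
correct = 0
for person, pattern in named.items():
    best_id = min(release, key=lambda rec: distance(rec[1], pattern))[0]
    if people[best_id] == person:
        correct += 1

print(f"re-identified {correct} of {len(people)} 'anonymous' records")
```

With signal this clean, nearest-neighbour matching re-identifies essentially every record; the study's point is that realistic aggregated step data carries enough of this kind of individual signature for machine learning to do the same at scale.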
###
Co-authors of the study are Liangyuan Na of MIT; Cong Yang and Chi-Cheng Lo of UC Berkeley; Fangyuan Zhao of Tsinghua University in China; and Yoshimi Fukuoka of UCSF.
Disclaimer: AAAS and EurekAlert! are not responsible for the accuracy of news releases posted to EurekAlert! by contributing institutions or for the use of any information through the EurekAlert system.

RAND Corporation has information about health care privacy at https://www.rand.org/topics/health-information-privacy.html

StaySafeOnline described health care privacy issues in the article, Health Information Privacy – Why Should We Care?

• Health data is very personal and may contain information we wish to keep confidential (e.g., mental health records) or potentially impact employment prospects or insurance coverage (e.g., chronic disease or family health history).
• It is long-lived – an exposed credit card can be canceled, but your medical history stays with you a lifetime.
• It is very complete and comprehensive – the information health care organizations have about their patients includes not only medical data, but also insurance and financial account information. This could be personal information like Social Security numbers, addresses or even the names of next of kin. Such a wealth of data can be monetized by cyber adversaries in many ways.
• In our digital health care world, the reliable availability of accurate health data to clinicians is critical to care delivery and any disruption in access to that data can delay care or jeopardize diagnosis.
The privacy and security of health information is strictly regulated in the U.S. under federal laws, such as the Health Insurance Portability and Accountability Act of 1996 (HIPAA), but also through various state laws and laws protecting individuals against discrimination based on genetic data….
For health care providers and insurers, there is typically no limitation for patients to disclose information about their health. Just as any patient can (and mostly should) share concerns about their health with family and friends, any patient can now easily share anything they want with the world via social media or join an online support group. Although these are generally positive steps that help an individual with health concerns find support and receive advice, we now need to be much more conscious about what…
However, concerns about your health care provider’s ability to protect your data should not lead to patients withholding information. Even in this digital age, the patient-doctor trust relationship is still the most important aspect of our health care system – and that trust goes both ways: patients need to trust their providers with often intimate and personal information, and providers need to know that their patients are not withholding anything due to privacy concerns.
We have entered the new age of digital medicine and almost universal availability of information, leading to better diagnosis and more successful treatments, ultimately reducing suffering and extending lives. However, this great opportunity also comes with new risks and we all – health care providers and patients alike – need to be conscious about how we use this new technology and share information…. https://staysafeonline.org/blog/health-information-privacy-care/

Resources:

Artificial Intelligence Will Redesign Healthcare https://medicalfuturist.com/artificial-intelligence-will-redesign-healthcare

9 Ways Artificial Intelligence is Affecting the Medical Field https://www.healthcentral.com/slideshow/8-ways-artificial-intelligence-is-affecting-the-medical-field#slide=2

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/

McGill University study: AI could predict cognitive decline leading to Alzheimer’s disease in the next five years

7 Oct

The National Institute on Aging described Alzheimer’s disease in What Is Alzheimer’s Disease?:

Alzheimer’s disease is an irreversible, progressive brain disorder that slowly destroys memory and thinking skills and, eventually, the ability to carry out the simplest tasks. In most people with the disease—those with the late-onset type—symptoms first appear in their mid-60s. Early-onset Alzheimer’s occurs between a person’s 30s and mid-60s and is very rare. Alzheimer’s disease is the most common cause of dementia among older adults.
The disease is named after Dr. Alois Alzheimer. In 1906, Dr. Alzheimer noticed changes in the brain tissue of a woman who had died of an unusual mental illness. Her symptoms included memory loss, language problems, and unpredictable behavior. After she died, he examined her brain and found many abnormal clumps (now called amyloid plaques) and tangled bundles of fibers (now called neurofibrillary, or tau, tangles).
These plaques and tangles in the brain are still considered some of the main features of Alzheimer’s disease. Another feature is the loss of connections between nerve cells (neurons) in the brain. Neurons transmit messages between different parts of the brain, and from the brain to muscles and organs in the body. Many other complex brain changes are thought to play a role in Alzheimer’s, too.
This damage initially appears to take place in the hippocampus, the part of the brain essential in forming memories. As neurons die, additional parts of the brain are affected. By the final stage of Alzheimer’s, damage is widespread, and brain tissue has shrunk significantly.
How Many Americans Have Alzheimer’s Disease?
Estimates vary, but experts suggest that as many as 5.5 million Americans age 65 and older may have Alzheimer’s. Many more under age 65 also have the disease. Unless Alzheimer’s can be effectively treated or prevented, the number of people with it will increase significantly if current population trends continue. This is because increasing age is the most important known risk factor for Alzheimer’s disease.
What Does Alzheimer’s Disease Look Like?
Memory problems are typically one of the first signs of Alzheimer’s, though initial symptoms may vary from person to person. A decline in other aspects of thinking, such as finding the right words, vision/spatial issues, and impaired reasoning or judgment, may also signal the very early stages of Alzheimer’s disease. Mild cognitive impairment (MCI) is a condition that can be an early sign of Alzheimer’s, but not everyone with MCI will develop the disease.
People with Alzheimer’s have trouble doing everyday things like driving a car, cooking a meal, or paying bills. They may ask the same questions over and over, get lost easily, lose things or put them in odd places, and find even simple things confusing. As the disease progresses, some people become worried, angry, or violent…. https://www.nia.nih.gov/health/what-alzheimers-disease

Artificial Intelligence (AI) might provide clues to the early detection of Alzheimer’s.

Live Science described AI in What Is Artificial Intelligence?:

One of the standard textbooks in the field, by University of California computer scientists Stuart Russell and Google’s director of research, Peter Norvig, puts artificial intelligence into four broad categories:
• machines that think like humans,
• machines that act like humans,
• machines that think rationally,
• machines that act rationally.
The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn’t necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks. Similarly, a machine that acts like a human doesn’t necessarily bear much resemblance to people in the way it processes information.

Even IBM’s Watson, which acted somewhat like a human when playing Jeopardy, wasn’t using anything like the rational processes humans use.
Tough tasks
Davis says he uses another definition, centered on what one wants a computer to do. “There are a number of cognitive tasks that people do easily — often, indeed, with no conscious thought at all — but that are extremely hard to program on computers. Archetypal examples are vision and natural language understanding. Artificial intelligence, as I define it, is the study of getting computers to carry out these tasks,” he said….
Computer vision has made a lot of strides in the past decade — cameras can now recognize faces. Other tasks, though, are proving tougher. For example, Davis and NYU psychology professor Gary Marcus wrote in the Communications of the Association for Computing Machinery of “common sense” tasks that computers find very difficult. A robot serving drinks, for example, can be programmed to recognize a request for one, and even to manipulate a glass and pour one. But if a fly lands in the glass, the computer still has a tough time deciding whether to pour the drink and serve it (or not).

Common sense
The issue is that much of “common sense” is very hard to model. Computer scientists have taken several approaches to get around that problem. IBM’s Watson, for instance, was able to do so well on Jeopardy! because it had a huge database of knowledge to work with and a few rules to string words together to make questions and answers. Watson, though, would have a difficult time with a simple open-ended conversation.
Beyond tasks, though, is the issue of learning. Machines can learn, said Kathleen McKeown, a professor of computer science at Columbia University. “Machine learning is a kind of AI,” she said.
Some machine learning works in a way similar to the way people do it, she noted. Google Translate, for example, uses a large corpus of text in a given language to translate to another language, a statistical process that doesn’t involve looking for the “meaning” of words. Humans, she said, do something similar, in that we learn languages by seeing lots of examples.
That said, Google Translate doesn’t always get it right, precisely because it doesn’t seek meaning and can sometimes be fooled by synonyms or differing connotations….
The upshot is AIs that can handle certain tasks well exist, as do AIs that look almost human because they have a large trove of data to work with. Computer scientists have been less successful coming up with an AI that can think the way we expect a human being to, or to act like a human in more than very limited situations…. https://www.livescience.com/55089-artificial-intelligence.html

AI might prove useful in diagnosing cognitive decline leading to Alzheimer’s.

Science Daily reported in AI could predict cognitive decline leading to Alzheimer’s disease in the next five years:

A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer’s disease.
Dr. Mallar Chakravarty, a computational neuroscientist at the Douglas Mental Health University Institute, and his colleagues from the University of Toronto and the Centre for Addiction and Mental Health, designed an algorithm that learns signatures from magnetic resonance imaging (MRI), genetics, and clinical data. This specific algorithm can help predict whether an individual’s cognitive faculties are likely to deteriorate towards Alzheimer’s in the next five years.
“At the moment, there are limited ways to treat Alzheimer’s and the best evidence we have is for prevention. Our AI methodology could have significant implications as a ‘doctor’s assistant’ that would help stream people onto the right pathway for treatment. For example, one could even initiate lifestyle changes that may delay the beginning stages of Alzheimer’s or even prevent it altogether,” says Chakravarty, an Assistant Professor in McGill University’s Department of Psychiatry.
The findings, published in PLOS Computational Biology, used data from the Alzheimer’s Disease NeuroImaging Initiative. The researchers trained their algorithms using data from more than 800 people ranging from normal healthy seniors to those experiencing mild cognitive impairment, and Alzheimer’s disease patients. They replicated their results within the study on an independently collected sample from the Australian Imaging and Biomarkers Lifestyle Study of Ageing.
Can the predictions be improved with more data?
“We are currently working on testing the accuracy of predictions using new data. It will help us to refine predictions and determine if we can predict even farther into the future,” says Chakravarty. With more data, the scientists would be able to better identify those in the population at greatest risk for cognitive decline leading to Alzheimer’s.
According to the Alzheimer Society of Canada, 564,000 Canadians had Alzheimer’s or another form of dementia in 2016. The figure will rise to 937,000 within 15 years.
Worldwide, around 50 million people have dementia and the total number is projected to reach 82 million in 2030 and 152 million in 2050, according to the World Health Organization. Alzheimer’s disease, the most common form of dementia, may contribute to 60-70% of cases. Presently, there is no truly effective treatment for this disease…. https://www.sciencedaily.com/releases/2018/10/181004155421.htm
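The multi-modal prediction recipe can be sketched in a few lines of Python. The feature names, numbers, and the simple logistic-regression model below are hypothetical stand-ins — the study's actual longitudinal models are described in the PLOS Computational Biology paper — but the overall idea is the same: combine imaging, genetic, and clinical measurements into one feature vector per subject and fit a classifier to predict decline.

```python
import math
import random

random.seed(1)

# Hypothetical subjects: a feature vector combines an MRI-derived
# measure, a genetic risk marker, and a clinical score; the label says
# whether cognition declined over the follow-up window. Features are
# roughly centered so plain gradient descent converges quickly.
def make_subject(declines):
    hippocampus = random.gauss(2.5 if declines else 3.0, 0.3)    # volume, cm^3
    apoe4 = float(random.random() < (0.6 if declines else 0.2))  # risk allele
    mmse = random.gauss(25 if declines else 28, 1.5)             # cognitive score
    return [hippocampus - 2.75, apoe4, mmse - 26.5], int(declines)

data = [make_subject(i % 2 == 0) for i in range(400)]
train, test = data[:300], data[300:]

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-max(-30.0, min(30.0, z))))  # clamp for stability

# Minimal logistic regression fitted by stochastic gradient descent.
w, b = [0.0, 0.0, 0.0], 0.0
lr = 0.01
for _ in range(200):
    for x, y in train:
        g = predict(w, b, x) - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

correct = sum((predict(w, b, x) > 0.5) == bool(y) for x, y in test)
print(f"held-out accuracy: {correct / len(test):.2f}")
```

Holding out part of the data, as above, mirrors the study's replication step: the real work of validating such a predictor is showing it still performs on an independently collected sample, which is why the researchers tested theirs on the Australian cohort.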

Citation:

AI could predict cognitive decline leading to Alzheimer’s disease in the next five years
Algorithms may help doctors stream people onto prevention path sooner
Date: October 4, 2018
Source: McGill University
Summary:
A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer’s disease.

Journal Reference:
Nikhil Bhagwat, Joseph D. Viviano, Aristotle N. Voineskos, M. Mallar Chakravarty. Modeling and prediction of clinical symptom trajectories in Alzheimer’s disease using longitudinal data. PLOS Computational Biology, 2018; 14 (9): e1006376 DOI: 10.1371/journal.pcbi.1006376

Here is the press release from McGill University:

AI Could Predict Cognitive Decline Leading to Alzheimer’s Disease in the Next 5 Years
News
Algorithms may help doctors stream people onto prevention path sooner
PUBLISHED: 4 OCT 2018
A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer’s disease.
Dr. Mallar Chakravarty, a computational neuroscientist at the Douglas Mental Health University Institute, and his colleagues from the University of Toronto and the Centre for Addiction and Mental Health, designed an algorithm that learns signatures from magnetic resonance imaging (MRI), genetics, and clinical data. This specific algorithm can help predict whether an individual’s cognitive faculties are likely to deteriorate towards Alzheimer’s in the next five years.
“At the moment, there are limited ways to treat Alzheimer’s and the best evidence we have is for prevention. Our AI methodology could have significant implications as a ‘doctor’s assistant’ that would help stream people onto the right pathway for treatment. For example, one could even initiate lifestyle changes that may delay the beginning stages of Alzheimer’s or even prevent it altogether,” says Chakravarty, an Assistant Professor in McGill University’s Department of Psychiatry.
The findings, published in PLOS Computational Biology, used data from the Alzheimer’s Disease NeuroImaging Initiative. The researchers trained their algorithms using data from more than 800 people ranging from normal healthy seniors to those experiencing mild cognitive impairment, and Alzheimer’s disease patients. They replicated their results within the study on an independently collected sample from the Australian Imaging and Biomarkers Lifestyle Study of Ageing.
Can the predictions be improved with more data?
“We are currently working on testing the accuracy of predictions using new data. It will help us to refine predictions and determine if we can predict even farther into the future,” says Chakravarty. With more data, the scientists would be able to better identify those in the population at greatest risk for cognitive decline leading to Alzheimer’s.
According to the Alzheimer Society of Canada, 564,000 Canadians had Alzheimer’s or another form of dementia in 2016. The figure will rise to 937,000 within 15 years.
Worldwide, around 50 million people have dementia and the total number is projected to reach 82 million in 2030 and 152 million in 2050, according to the World Health Organization. Alzheimer’s disease, the most common form of dementia, may contribute to 60–70% of cases. Presently, there is no truly effective treatment for this disease.

This work was funded by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, the Fonds de recherche du Québec—Santé, Weston Brain Institute, Michael J. Fox Foundation for Parkinson’s Research, Alzheimer’s Society, Brain Canada, and the McGill University Healthy Brains for Healthy Lives – Canada First Research Excellence Fund.
The article “Modeling and prediction of clinical symptom trajectories in Alzheimer’s disease” was published in PLOS Computational Biology.
For information and interviews
Bruno Geoffroy
Press Information Officer – Media Relations Office
CIUSSS de l’Ouest-de-l’Île-de-Montréal (Douglas Mental Health University Institute)
Tel.: 514-630-2225, ext. 5257 // relations.medias.comtl@ssss.gouv.qc.ca

Alzheimer’s and Dementia Alliance of Wisconsin described why early detection is important:

Early diagnosis is key.
There are at least a dozen advantages to obtaining an early and accurate diagnosis when cognitive symptoms are first noticed.
1. Your symptoms might be reversible.
The symptoms you are concerned about might be caused by a condition that is reversible. And even if there is also an underlying dementia such as Alzheimer’s disease, diagnosis and treatment of reversible conditions can improve brain function and reduce symptoms.

2. It may be treatable.
Some causes of cognitive decline are not reversible, but might be treatable. Appropriate treatment can stop or slow the rate of further decline.
3. With treatments, the sooner the better.
Treatment of Alzheimer’s and other dementia-causing diseases is typically most effective when started early in the disease process. Once more effective treatments become available, obtaining an early and accurate diagnosis will be even more crucial.

4. Diagnoses are more accurate early in the disease process.
A more accurate diagnosis is possible when a complete history can be taken early in the disease process, while the person is still able to answer questions and report concerns and when observers can still recall the order in which symptoms first appeared. Obtaining an accurate diagnosis can be difficult once most of the brain has become affected.
5. It’s empowering.
An earlier diagnosis enables the person to participate in their own legal, financial, and long-term care planning and to make their wishes known to family members.
6. You can focus on what’s important to you.
It allows the person the opportunity to reprioritize how they spend their time – focusing on what matters most to them – perhaps completing life goals such as travel, recording family history, completing projects, or making memories with grandchildren while they still can.
7. You can make your best choices.
Early diagnosis can prevent unwise choices that might otherwise be made in ignorance – such as moving far away from family and friends, or making legal or financial commitments that will be hard to keep as the disease progresses.
8. You can use the resources available to you.
Individuals diagnosed early in the disease process can take advantage of early-stage support groups and learn tips and strategies to better manage and cope with the symptoms of the disease.
9. Participate or advocate for research.
Those diagnosed early can also take advantage of clinical trials – or advocate for more research and improved care and opportunities.
10. You can further people’s understanding of the disease.
Earlier diagnosis helps to reduce the stigma associated with the disease when we learn to associate the disease with people in the early stages, when they are still cogent and active in the community.
11. It will help your family.
An earlier diagnosis gives families more opportunity to learn about the disease, develop realistic expectations, and plan for their future together – which can result in reduced stress and feelings of burden and regret later in the disease process.
12. It will help you, too.
Early diagnosis allows the person and family to attribute cognitive changes to the disease rather than to personal failings – preserving the person’s ego throughout the disease process…. https://alzwisc.org/Importance%20of%20an%20early%20diagnosis.htm

AI’s role in treatment of Alzheimer’s is an example of better living through technology.

Resources:
What Is Alzheimer’s? https://www.alz.org/alzheimers-dementia/what-is-alzheimers

Understanding Alzheimer’s Disease: the Basics https://www.webmd.com/alzheimers/guide/understanding-alzheimers-disease-basics

What’s to know about Alzheimer’s disease? https://www.medicalnewstoday.com/articles/159442.php

Alzheimer’s Disease https://www.cdc.gov/aging/aginginfo/alzheimers.htm

What is Artificial Intelligence? https://www.computerworld.com/article/2906336/emerging-technology/what-is-artificial-intelligence.html

Artificial Intelligence: What it is and why it matters https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html
Brain https://drwilda.com/tag/brain/

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/