Tag Archives: Live Science

NYU Langone Health / NYU School of Medicine study: Artificial intelligence can diagnose PTSD by analyzing voices

23 Apr

Live Science described AI in What Is Artificial Intelligence?:

One of the standard textbooks in the field, by University of California computer scientists Stuart Russell and Google’s director of research, Peter Norvig, puts artificial intelligence into four broad categories:
• machines that think like humans,
• machines that act like humans,
• machines that think rationally,
• machines that act rationally.
The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn’t necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks. Similarly, a machine that acts like a human doesn’t necessarily bear much resemblance to people in the way it processes information.
Even IBM’s Watson, which acted somewhat like a human when playing Jeopardy, wasn’t using anything like the rational processes humans use.
Tough tasks
Davis says he uses another definition, centered on what one wants a computer to do. “There are a number of cognitive tasks that people do easily — often, indeed, with no conscious thought at all — but that are extremely hard to program on computers. Archetypal examples are vision and natural language understanding. Artificial intelligence, as I define it, is the study of getting computers to carry out these tasks,” he said….
Computer vision has made a lot of strides in the past decade — cameras can now recognize faces. Other tasks, though, are proving tougher. For example, Davis and NYU psychology professor Gary Marcus wrote in the Communications of the Association for Computing Machinery of “common sense” tasks that computers find very difficult. A robot serving drinks, for example, can be programmed to recognize a request for one, and even to manipulate a glass and pour one. But if a fly lands in the glass the computer still has a tough time deciding whether to pour the drink in and serve it (or not).
Common sense
The issue is that much of “common sense” is very hard to model. Computer scientists have taken several approaches to get around that problem. IBM’s Watson, for instance, was able to do so well on Jeopardy! because it had a huge database of knowledge to work with and a few rules to string words together to make questions and answers. Watson, though, would have a difficult time with a simple open-ended conversation.
Beyond tasks, though, is the issue of learning. Machines can learn, said Kathleen McKeown, a professor of computer science at Columbia University. “Machine learning is a kind of AI,” she said.
Some machine learning works in a way similar to the way people do it, she noted. Google Translate, for example, uses a large corpus of text in a given language to translate to another language, a statistical process that doesn’t involve looking for the “meaning” of words. Humans, she said, do something similar, in that we learn languages by seeing lots of examples.
That said, Google Translate doesn’t always get it right, precisely because it doesn’t seek meaning and can sometimes be fooled by synonyms or differing connotations….
The upshot is AIs that can handle certain tasks well exist, as do AIs that look almost human because they have a large trove of data to work with. Computer scientists have been less successful coming up with an AI that can think the way we expect a human being to, or to act like a human in more than very limited situations…. https://www.livescience.com/55089-artificial-intelligence.html

NYU scientists used AI to diagnose PTSD, which is short for Post-Traumatic Stress Disorder.

The National Institute of Mental Health defined PTSD:

Post-Traumatic Stress Disorder
Overview
PTSD is a disorder that develops in some people who have experienced a shocking, scary, or dangerous event.
It is natural to feel afraid during and after a traumatic situation. Fear triggers many split-second changes in the body to help defend against danger or to avoid it. This “fight-or-flight” response is a typical reaction meant to protect a person from harm. Nearly everyone will experience a range of reactions after trauma, yet most people recover from initial symptoms naturally. Those who continue to experience problems may be diagnosed with PTSD. People who have PTSD may feel stressed or frightened even when they are not in danger.
Signs and Symptoms
Not every traumatized person develops ongoing (chronic) or even short-term (acute) PTSD. Not everyone with PTSD has been through a dangerous event. Some experiences, like the sudden, unexpected death of a loved one, can also cause PTSD. Symptoms usually begin early, within 3 months of the traumatic incident, but sometimes they begin years afterward. Symptoms must last more than a month and be severe enough to interfere with relationships or work to be considered PTSD. The course of the illness varies. Some people recover within 6 months, while others have symptoms that last much longer. In some people, the condition becomes chronic.
A doctor who has experience helping people with mental illnesses, such as a psychiatrist or psychologist, can diagnose PTSD.
To be diagnosed with PTSD, an adult must have all of the following for at least 1 month:
• At least one re-experiencing symptom
• At least one avoidance symptom
• At least two arousal and reactivity symptoms
• At least two cognition and mood symptoms
Re-experiencing symptoms include:
• Flashbacks—reliving the trauma over and over, including physical symptoms like a racing heart or sweating
• Bad dreams
• Frightening thoughts
Re-experiencing symptoms may cause problems in a person’s everyday routine. The symptoms can start from the person’s own thoughts and feelings. Words, objects, or situations that are reminders of the event can also trigger re-experiencing symptoms.
Avoidance symptoms include:
• Staying away from places, events, or objects that are reminders of the traumatic experience
• Avoiding thoughts or feelings related to the traumatic event
Things that remind a person of the traumatic event can trigger avoidance symptoms. These symptoms may cause a person to change his or her personal routine. For example, after a bad car accident, a person who usually drives may avoid driving or riding in a car.
Arousal and reactivity symptoms include:
• Being easily startled
• Feeling tense or “on edge”
• Having difficulty sleeping
• Having angry outbursts
Arousal symptoms are usually constant, instead of being triggered by things that remind one of the traumatic events. These symptoms can make the person feel stressed and angry. They may make it hard to do daily tasks, such as sleeping, eating, or concentrating.
Cognition and mood symptoms include:
• Trouble remembering key features of the traumatic event
• Negative thoughts about oneself or the world
• Distorted feelings like guilt or blame
• Loss of interest in enjoyable activities
Cognition and mood symptoms can begin or worsen after the traumatic event, but are not due to injury or substance use. These symptoms can make the person feel alienated or detached from friends or family members.
It is natural to have some of these symptoms after a dangerous event. Sometimes people have very serious symptoms that go away after a few weeks. This is called acute stress disorder, or ASD. When the symptoms last more than a month, seriously affect one’s ability to function, and are not due to substance use, medical illness, or anything except the event itself, they might be PTSD. Some people with PTSD don’t show any symptoms for weeks or months. PTSD is often accompanied by depression, substance abuse, or one or more of the other anxiety disorders….
https://www.nimh.nih.gov/health/topics/post-traumatic-stress-disorder-ptsd/index.shtml

See Recognizing PTSD Early Warning Signs by Matthew Tull, PhD: https://www.verywellmind.com/recognizing-ptsd-early-warning-signs-2797569

Science Daily reported in Artificial intelligence can diagnose PTSD by analyzing voices:

A specially designed computer program can help diagnose post-traumatic stress disorder (PTSD) in veterans by analyzing their voices, a new study finds.
Published online April 22 in the journal Depression and Anxiety, the study found that an artificial intelligence tool can distinguish — with 89 percent accuracy — between the voices of those with or without PTSD.
“Our findings suggest that speech-based characteristics can be used to diagnose this disease, and with further refinement and validation, may be employed in the clinic in the near future,” says senior study author Charles R. Marmar, MD, the Lucius N. Littauer Professor and chair of the Department of Psychiatry at NYU School of Medicine.
More than 70 percent of adults worldwide experience a traumatic event at some point in their lives, with up to 12 percent of people in some struggling countries suffering from PTSD. Those with the condition experience strong, persistent distress when reminded of a triggering event.
The study authors say that a PTSD diagnosis is most often determined by clinical interview or a self-report assessment, both inherently prone to biases. This has led to efforts to develop objective, measurable, physical markers of PTSD progression, much like laboratory values for medical conditions, but progress has been slow.
Learning How to Learn
In the current study, the research team used a statistical/machine learning technique, called random forests, that has the ability to “learn” how to classify individuals based on examples. Such AI programs build “decision” rules and mathematical models that enable decision-making with increasing accuracy as the amount of training data grows.
The researchers first recorded standard, hours-long diagnostic interviews, called Clinician-Administered PTSD Scale, or CAPS, of 53 Iraq and Afghanistan veterans with military-service-related PTSD, as well as those of 78 veterans without the disease. The recordings were then fed into voice software from SRI International — the institute that also invented Siri — to yield a total of 40,526 speech-based features captured in short spurts of talk, which the team’s AI program sifted through for patterns.
The random forest program linked patterns of specific voice features with PTSD, including less clear speech and a lifeless, metallic tone, both of which had long been reported anecdotally as helpful in diagnosis. While the current study did not explore the disease mechanisms behind PTSD, the theory is that traumatic events change brain circuits that process emotion and muscle tone, which affects a person’s voice.
Moving forward, the research team plans to train the AI voice tool with more data, further validate it on an independent sample, and apply for government approval to use the tool clinically.
“Speech is an attractive candidate for use in an automated diagnostic system, perhaps as part of a future PTSD smartphone app, because it can be measured cheaply, remotely, and non-intrusively,” says lead author Adam Brown, PhD, adjunct assistant professor in the Department of Psychiatry at NYU School of Medicine.
“The speech analysis technology used in the current study on PTSD detection falls into the range of capabilities included in our speech analytics platform called SenSay Analytics™,” says Dimitra Vergyri, director of SRI International’s Speech Technology and Research (STAR) Laboratory. “The software analyzes words — in combination with frequency, rhythm, tone, and articulatory characteristics of speech — to infer the state of the speaker, including emotion, sentiment, cognition, health, mental health and communication quality. The technology has been involved in a series of industry applications visible in startups like Oto, Ambit and Decoded Health.” https://www.sciencedaily.com/releases/2019/04/190422082232.htm
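For readers who want a concrete picture of the “random forests” step described above, here is a minimal, hypothetical sketch in Python using scikit-learn. It is not the authors’ code: the file name, column names, and feature count are placeholders, and the actual study worked with 40,526 speech features extracted by SRI International’s software.

```python
# Minimal sketch (not the study's code): train a random forest to separate
# PTSD cases from controls using pre-extracted speech features.
# "speech_features.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

data = pd.read_csv("speech_features.csv")   # one row per veteran
X = data.drop(columns=["ptsd"])             # numeric speech features
y = data["ptsd"]                            # 1 = PTSD case, 0 = control

model = RandomForestClassifier(n_estimators=500, random_state=0)

# Cross-validated probabilities give an honest estimate of discrimination.
probs = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print("Cross-validated AUC:", roc_auc_score(y, probs))

# Feature importances hint at which speech markers drive the classification,
# loosely analogous to the 18 features the published classifier retained.
model.fit(X, y)
for importance, name in sorted(zip(model.feature_importances_, X.columns), reverse=True)[:10]:
    print(f"{name}: {importance:.3f}")
```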

Citation:

Artificial intelligence can diagnose PTSD by analyzing voices
Study tests potential telemedicine approach
Date: April 22, 2019
Source: NYU Langone Health / NYU School of Medicine
Summary:
A specially designed computer program can help to diagnose post-traumatic stress disorder (PTSD) in veterans by analyzing their voices.

Speech‐based markers for posttraumatic stress disorder in US veterans
First published: 22 April 2019
https://doi.org/10.1002/da.22890
Preliminary findings from this study were presented at the 16th annual conference of the International Speech Communication Association, Dresden, Germany, September 6–10, 2015.
Charles R. Marmar
Corresponding Author
E-mail address: Charles.Marmar@nyulangone.org
http://orcid.org/0000-0001-8427-5607
Department of Psychiatry, New York University School of Medicine, New York, New York
Steven and Alexandra Cohen Veterans Center for the Study of Post‐Traumatic Stress and Traumatic Brain Injury, New York, New York
Marmar and Brown should be considered joint first authors.
Correspondence Charles R. Marmar, M.D., Department of Psychiatry, New York University School of Medicine, 1 Park Avenue, New York, NY 10016. Email: Charles.Marmar@nyulangone.org
Background
The diagnosis of posttraumatic stress disorder (PTSD) is usually based on clinical interviews or self‐report measures. Both approaches are subject to under‐ and over‐reporting of symptoms. An objective test is lacking. We have developed a classifier of PTSD based on objective speech‐marker features that discriminate PTSD cases from controls.
Methods
Speech samples were obtained from warzone‐exposed veterans, 52 cases with PTSD and 77 controls, assessed with the Clinician‐Administered PTSD Scale. Individuals with major depressive disorder (MDD) were excluded. Audio recordings of clinical interviews were used to obtain 40,526 speech features which were input to a random forest (RF) algorithm.
Results
The selected RF used 18 speech features and the receiver operating characteristic curve had an area under the curve (AUC) of 0.954. At a probability of PTSD cut point of 0.423, Youden’s index was 0.787, and overall correct classification rate was 89.1%. The probability of PTSD was higher for markers that indicated slower, more monotonous speech, less change in tonality, and less activation. Depression symptoms, alcohol use disorder, and TBI did not meet statistical tests to be considered confounders.
Conclusions
This study demonstrates that a speech‐based algorithm can objectively differentiate PTSD cases from controls. The RF classifier had a high AUC. Further validation in an independent sample and appraisal of the classifier to identify those with MDD only compared with those with PTSD comorbid with MDD is required.
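The cut point of 0.423 and Youden’s index of 0.787 reported in the abstract come from standard ROC analysis: the probability threshold is chosen to maximize J = sensitivity + specificity − 1. The short sketch below, using synthetic stand-in data rather than the study’s data, shows how such a threshold is selected.

```python
# Illustration of choosing a probability cut point by maximizing Youden's J.
# y_true and y_prob are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=129)  # stand-in labels (the paper had 52 cases, 77 controls)
y_prob = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=129), 0, 1)  # stand-in probabilities

fpr, tpr, thresholds = roc_curve(y_true, y_prob)
youden_j = tpr - fpr  # equals sensitivity + specificity - 1
best = np.argmax(youden_j)

print("AUC:", round(roc_auc_score(y_true, y_prob), 3))
print("Chosen cut point:", round(float(thresholds[best]), 3), "Youden's J:", round(float(youden_j[best]), 3))
```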

Here is the press release from NYU:

NEWS RELEASE 22-APR-2019
Artificial intelligence can diagnose PTSD by analyzing voices
Study tests potential telemedicine approach
NYU LANGONE HEALTH / NYU SCHOOL OF MEDICINE
VIDEO: NYU School of Medicine researchers say artificial intelligence could be used to diagnose PTSD by analyzing voices.
Credit: NYU School of Medicine
A specially designed computer program can help diagnose post-traumatic stress disorder (PTSD) in veterans by analyzing their voices, a new study finds.
Published online April 22 in the journal Depression and Anxiety, the study found that an artificial intelligence tool can distinguish – with 89 percent accuracy – between the voices of those with or without PTSD.
“Our findings suggest that speech-based characteristics can be used to diagnose this disease, and with further refinement and validation, may be employed in the clinic in the near future,” says senior study author Charles R. Marmar, MD, the Lucius N. Littauer Professor and chair of the Department of Psychiatry at NYU School of Medicine.
More than 70 percent of adults worldwide experience a traumatic event at some point in their lives, with up to 12 percent of people in some struggling countries suffering from PTSD. Those with the condition experience strong, persistent distress when reminded of a triggering event.
The study authors say that a PTSD diagnosis is most often determined by clinical interview or a self-report assessment, both inherently prone to biases. This has led to efforts to develop objective, measurable, physical markers of PTSD progression, much like laboratory values for medical conditions, but progress has been slow.
Learning How to Learn
In the current study, the research team used a statistical/machine learning technique, called random forests, that has the ability to “learn” how to classify individuals based on examples. Such AI programs build “decision” rules and mathematical models that enable decision-making with increasing accuracy as the amount of training data grows.
The researchers first recorded standard, hours-long diagnostic interviews, called Clinician-Administered PTSD Scale, or CAPS, of 53 Iraq and Afghanistan veterans with military-service-related PTSD, as well as those of 78 veterans without the disease. The recordings were then fed into voice software from SRI International – the institute that also invented Siri – to yield a total of 40,526 speech-based features captured in short spurts of talk, which the team’s AI program sifted through for patterns.
The random forest program linked patterns of specific voice features with PTSD, including less clear speech and a lifeless, metallic tone, both of which had long been reported anecdotally as helpful in diagnosis. While the current study did not explore the disease mechanisms behind PTSD, the theory is that traumatic events change brain circuits that process emotion and muscle tone, which affects a person’s voice.
Moving forward, the research team plans to train the AI voice tool with more data, further validate it on an independent sample, and apply for government approval to use the tool clinically.
“Speech is an attractive candidate for use in an automated diagnostic system, perhaps as part of a future PTSD smartphone app, because it can be measured cheaply, remotely, and non-intrusively,” says lead author Adam Brown, PhD, adjunct assistant professor in the Department of Psychiatry at NYU School of Medicine.
“The speech analysis technology used in the current study on PTSD detection falls into the range of capabilities included in our speech analytics platform called SenSay Analytics™,” says Dimitra Vergyri, director of SRI International’s Speech Technology and Research (STAR) Laboratory. “The software analyzes words – in combination with frequency, rhythm, tone, and articulatory characteristics of speech – to infer the state of the speaker, including emotion, sentiment, cognition, health, mental health and communication quality. The technology has been involved in a series of industry applications visible in startups like Oto, Ambit and Decoded Health.”
###
Along with Marmar and Brown, authors of the study from the Department of Psychiatry were Meng Qian, Eugene Laska, Carole Siegel, Meng Li, and Duna Abu-Amara. Study authors from SRI International were Andreas Tsiartas, Dimitra Vergyri, Colleen Richey, Jennifer Smith, and Bruce Knoth. Brown is also an associate professor of psychology at the New School for Social Research.
The study was supported by the U.S. Army Medical Research & Acquisition Activity (USAMRAA) and Telemedicine & Advanced Technology Research Center (TATRC) grant W81XWH-11-C-0004, as well as by the Steven and Alexandra Cohen Foundation.
Media Inquiries:
Jim Mandler
(212) 404-3500
jim.mandler@nyulangone.org

Resources:

Artificial Intelligence Will Redesign Healthcare https://medicalfuturist.com/artificial-intelligence-will-redesign-healthcare

9 Ways Artificial Intelligence is Affecting the Medical Field https://www.healthcentral.com/slideshow/8-ways-artificial-intelligence-is-affecting-the-medical-field#slide=2

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/

University of Washington study: New houseplant can clean your home’s air

1 Jan

Elizabeth Palermo wrote in the Live Science article, Do Indoor Plants Really Clean the Air?

Sure, that potted fern is pretty, but can it really spruce up the air quality in your home? Studies by scientists at NASA, Pennsylvania State University, the University of Georgia and other respected institutions suggest that it can.
Plants are notoriously adept at absorbing gases through pores on the surface of their leaves. It’s this skill that facilitates photosynthesis, the process by which plants convert light energy and carbon dioxide into chemical energy to fuel growth.
But scientists studying the air-purification capacities of indoor plants have found that plants can absorb many other gases in addition to carbon dioxide, including a long list of volatile organic compounds (VOCs). Benzene (found in some plastics, fabrics, pesticides and cigarette smoke) and formaldehyde (found in some cosmetics, dish detergent, fabric softener and carpet cleaner) are examples of common indoor VOCs that plants help eliminate…. https://www.livescience.com/38445-indoor-plants-clean-air.html

Research from the University of Washington supports Palermo’s article.

Science Daily reported in New houseplant can clean your home’s air:

We like to keep the air in our homes as clean as possible, and sometimes we use HEPA air filters to keep offending allergens and dust particles at bay.
But some hazardous compounds are too small to be trapped in these filters. Small molecules like chloroform, which is present in small amounts in chlorinated water, or benzene, which is a component of gasoline, build up in our homes when we shower or boil water, or when we store cars or lawn mowers in attached garages. Both benzene and chloroform exposure have been linked to cancer.
Now researchers at the University of Washington have genetically modified a common houseplant — pothos ivy — to remove chloroform and benzene from the air around it. The modified plants express a protein, called 2E1, that transforms these compounds into molecules that the plants can then use to support their own growth. The team will publish its findings Wednesday, Dec. 19 in Environmental Science & Technology….
The team decided to use a protein called cytochrome P450 2E1, or 2E1 for short, which is present in all mammals, including humans. In our bodies, 2E1 turns benzene into a chemical called phenol and chloroform into carbon dioxide and chloride ions. But 2E1 is located in our livers and is turned on when we drink alcohol. So it’s not available to help us process pollutants in our air….
The researchers made a synthetic version of the gene that serves as instructions for making the rabbit form of 2E1. Then they introduced it into pothos ivy so that each cell in the plant expressed the protein. Pothos ivy doesn’t flower in temperate climates so the genetically modified plants won’t be able to spread via pollen.
“This whole process took more than two years,” said lead author Long Zhang, who is a research scientist in the civil and environmental engineering department. “That is a long time, compared to other lab plants, which might only take a few months. But we wanted to do this in pothos because it’s a robust houseplant that grows well under all sort of conditions.”
The researchers then tested how well their modified plants could remove the pollutants from air compared to normal pothos ivy. They put both types of plants in glass tubes and then added either benzene or chloroform gas into each tube. Over 11 days, the team tracked how the concentration of each pollutant changed in each tube.
For the unmodified plants, the concentration of either gas didn’t change over time. But for the modified plants, the concentration of chloroform dropped by 82 percent after three days, and it was almost undetectable by day six. The concentration of benzene also decreased in the modified plant vials, but more slowly: By day eight, the benzene concentration had dropped by about 75 percent…. https://www.sciencedaily.com/releases/2018/12/181219093911.htm
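As a rough back-of-the-envelope calculation (mine, not part of the study), treating the reported drops as simple first-order decay gives approximate rate constants and half-lives for the two gases in the sealed tubes:

```python
# Back-of-the-envelope estimate assuming first-order decay, C(t) = C0 * exp(-k * t).
# The first-order assumption is mine; the study reports only the percentage drops.
import math

def first_order_rate(fraction_remaining: float, days: float) -> float:
    """Rate constant k (per day) implied by the fraction of gas left after `days`."""
    return -math.log(fraction_remaining) / days

for gas, fraction_remaining, days in [("chloroform", 1 - 0.82, 3), ("benzene", 1 - 0.75, 8)]:
    k = first_order_rate(fraction_remaining, days)
    print(f"{gas}: k ≈ {k:.2f}/day, half-life ≈ {math.log(2) / k:.1f} days")

# Output: chloroform k ≈ 0.57/day (half-life ≈ 1.2 days); benzene k ≈ 0.17/day (half-life ≈ 4.0 days)
```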

Citation:

New houseplant can clean your home’s air
Date: December 19, 2018
Source: University of Washington
Summary:
Researchers have genetically modified a common houseplant to remove chloroform and benzene from the air around it.
Journal Reference:
Long Zhang, Ryan Routsong, Stuart E. Strand. Greatly Enhanced Removal of Volatile Organic Carcinogens by a Genetically Modified Houseplant, Pothos Ivy (Epipremnum aureum) Expressing the Mammalian Cytochrome P450 2e1 Gene. Environmental Science & Technology, 2018; DOI: 10.1021/acs.est.8b04811

Here is the press release from the University of Washington:

December 19, 2018
Researchers develop a new houseplant that can clean your home’s air
Sarah McQuate
UW News
We like to keep the air in our homes as clean as possible, and sometimes we use HEPA air filters to keep offending allergens and dust particles at bay.
But some hazardous compounds are too small to be trapped in these filters. Small molecules like chloroform, which is present in small amounts in chlorinated water, or benzene, which is a component of gasoline, build up in our homes when we shower or boil water, or when we store cars or lawn mowers in attached garages. Both benzene and chloroform exposure have been linked to cancer.
Now researchers at the University of Washington have genetically modified a common houseplant — pothos ivy — to remove chloroform and benzene from the air around it. The modified plants express a protein, called 2E1, that transforms these compounds into molecules that the plants can then use to support their own growth. The team published its findings Dec. 19 in Environmental Science & Technology.
“People haven’t really been talking about these hazardous organic compounds in homes, and I think that’s because we couldn’t do anything about them,” said senior author Stuart Strand, who is a research professor in the UW’s civil and environmental engineering department. “Now we’ve engineered houseplants to remove these pollutants for us.”
The team decided to use a protein called cytochrome P450 2E1, or 2E1 for short, which is present in all mammals, including humans. In our bodies, 2E1 turns benzene into a chemical called phenol and chloroform into carbon dioxide and chloride ions. But 2E1 is located in our livers and is turned on when we drink alcohol. So it’s not available to help us process pollutants in our air.
“We decided we should have this reaction occur outside of the body in a plant, an example of the ‘green liver’ concept,” Strand said. “And 2E1 can be beneficial for the plant, too. Plants use carbon dioxide and chloride ions to make their food, and they use phenol to help make components of their cell walls.”
The researchers made a synthetic version of the gene that serves as instructions for making the rabbit form of 2E1. Then they introduced it into pothos ivy so that each cell in the plant expressed the protein. Pothos ivy doesn’t flower in temperate climates so the genetically modified plants won’t be able to spread via pollen.
“This whole process took more than two years,” said lead author Long Zhang, who is a research scientist in the civil and environmental engineering department. “That is a long time, compared to other lab plants, which might only take a few months. But we wanted to do this in pothos because it’s a robust houseplant that grows well under all sort of conditions.”
The researchers then tested how well their modified plants could remove the pollutants from air compared to normal pothos ivy. They put both types of plants in glass tubes and then added either benzene or chloroform gas into each tube. Over 11 days, the team tracked how the concentration of each pollutant changed in each tube.
For the unmodified plants, the concentration of either gas didn’t change over time. But for the modified plants, the concentration of chloroform dropped by 82 percent after three days, and it was almost undetectable by day six. The concentration of benzene also decreased in the modified plant vials, but more slowly: By day eight, the benzene concentration had dropped by about 75 percent.
In order to detect these changes in pollutant levels, the researchers used much higher pollutant concentrations than are typically found in homes. But the team expects that the home levels would drop similarly, if not faster, over the same time frame.
Plants in the home would also need to be inside an enclosure with something to move air past their leaves, like a fan, Strand said.
“If you had a plant growing in the corner of a room, it will have some effect in that room,” he said. “But without air flow, it will take a long time for a molecule on the other end of the house to reach the plant.”
The team is currently working to increase the plants’ capabilities by adding a protein that can break down another hazardous molecule found in home air: formaldehyde, which is present in some wood products, such as laminate flooring and cabinets, and tobacco smoke.
“These are all stable compounds, so it’s really hard to get rid of them,” Strand said. “Without proteins to break down these molecules, we’d have to use high-energy processes to do it. It’s so much simpler and more sustainable to put these proteins all together in a houseplant.”
Civil and environmental engineering research technician Ryan Routsong is also a co-author. This research was funded by the National Science Foundation, Amazon Catalyst at UW and the National Institute of Environmental Health Sciences.
###
For more information, contact Strand at sstrand@uw.edu or 206-543-5350.

Joan Clark lists 17 plants that clean indoor air.

Clark wrote in 17 plants that clean indoor air:

Many people think that air pollution only consists of smog or car emissions. While these do constitute outdoor air pollution, there is a much more dangerous kind of air pollution known as indoor air pollution.
Indoor air pollution is more hazardous than outdoor air pollution because it is a more concentrated type of pollution that is caused by inadequate ventilation, toxic products, humidity, and high temperatures. Thankfully, numerous plants clean the air.
• Aloe Vera
• Warneck Dracaena (Dracaena deremensis warneckii)
• Snake Plant (Sansevieria trifasciata)
• Golden Pothos (Scindapsus aureus)
• Chrysanthemum (Chrysantheium morifolium)
• Red-edged Dracaena (Dracaena marginata)
• Weeping Fig (Ficus benjamina)
• Chinese Evergreen (Aglaonema Crispum)
• Azalea (Rhododendron simsii)
• English Ivy (Hedera helix)
• Bamboo Palm (Chamaedorea sefritzii)
• Heart Leaf Philodendron (Philodendron oxycardium)
• Peace Lily (Spathiphyllum)
• Spider Plant (Chlorophytum comosum)
• Boston Fern (Nephrolepis exaltata)
• Gerbera Daisy (Gerbera jamesonii)
• Rubber Plant (Ficus elastica) https://www.tipsbulletin.com/plants-that-clean-the-air/

Consumer Reports noted that indoor air can be polluted.

Mary H.J. Farrell wrote in the Consumer Reports article, How to Improve Indoor Air Quality: The air in your house can be five times more polluted than the air outside:

Your windows may be spotless and your floors may sparkle, but for millions of adults and children with allergies, asthma, and other respiratory conditions, a house is only as clean as its air.
Though it might be hard to believe, indoor air can be five times dirtier than what we breathe outside, exposing us to carcinogens, including radon and formaldehyde, as well as quotidian lung-gunking impurities, such as pollen, dust mites, pet dander, and a variety of particulate matter created when we burn candles or cook…. https://www.consumerreports.org/indoor-air-quality/how-to-improve-indoor-air-quality/

Using plants to clean indoor air pollution is part of a strategy to reduce indoor pollutants.

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
https://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/

McGill University study: AI could predict cognitive decline leading to Alzheimer’s disease in the next five years

7 Oct

The National Institute on Aging described Alzheimer’s disease in What Is Alzheimer’s Disease?:

Alzheimer’s disease is an irreversible, progressive brain disorder that slowly destroys memory and thinking skills and, eventually, the ability to carry out the simplest tasks. In most people with the disease—those with the late-onset type—symptoms first appear in their mid-60s. Early-onset Alzheimer’s occurs between a person’s 30s and mid-60s and is very rare. Alzheimer’s disease is the most common cause of dementia among older adults.
The disease is named after Dr. Alois Alzheimer. In 1906, Dr. Alzheimer noticed changes in the brain tissue of a woman who had died of an unusual mental illness. Her symptoms included memory loss, language problems, and unpredictable behavior. After she died, he examined her brain and found many abnormal clumps (now called amyloid plaques) and tangled bundles of fibers (now called neurofibrillary, or tau, tangles).
These plaques and tangles in the brain are still considered some of the main features of Alzheimer’s disease. Another feature is the loss of connections between nerve cells (neurons) in the brain. Neurons transmit messages between different parts of the brain, and from the brain to muscles and organs in the body. Many other complex brain changes are thought to play a role in Alzheimer’s, too.
This damage initially appears to take place in the hippocampus, the part of the brain essential in forming memories. As neurons die, additional parts of the brain are affected. By the final stage of Alzheimer’s, damage is widespread, and brain tissue has shrunk significantly.
How Many Americans Have Alzheimer’s Disease?
Estimates vary, but experts suggest that as many as 5.5 million Americans age 65 and older may have Alzheimer’s. Many more under age 65 also have the disease. Unless Alzheimer’s can be effectively treated or prevented, the number of people with it will increase significantly if current population trends continue. This is because increasing age is the most important known risk factor for Alzheimer’s disease.
What Does Alzheimer’s Disease Look Like?
Memory problems are typically one of the first signs of Alzheimer’s, though initial symptoms may vary from person to person. A decline in other aspects of thinking, such as finding the right words, vision/spatial issues, and impaired reasoning or judgment, may also signal the very early stages of Alzheimer’s disease. Mild cognitive impairment (MCI) is a condition that can be an early sign of Alzheimer’s, but not everyone with MCI will develop the disease.
People with Alzheimer’s have trouble doing everyday things like driving a car, cooking a meal, or paying bills. They may ask the same questions over and over, get lost easily, lose things or put them in odd places, and find even simple things confusing. As the disease progresses, some people become worried, angry, or violent…. https://www.nia.nih.gov/health/what-alzheimers-disease

Artificial Intelligence (AI) might provide clues to the early detection of Alzheimer’s.

Live Science described AI in What Is Artificial Intelligence?:

One of the standard textbooks in the field, by University of California computer scientists Stuart Russell and Google’s director of research, Peter Norvig, puts artificial intelligence into four broad categories:
• machines that think like humans,
• machines that act like humans,
• machines that think rationally,
• machines that act rationally.
The differences between them can be subtle, notes Ernest Davis, a professor of computer science at New York University. AlphaGo, the computer program that beat a world champion at Go, acts rationally when it plays the game (it plays to win). But it doesn’t necessarily think the way a human being does, though it engages in some of the same pattern-recognition tasks. Similarly, a machine that acts like a human doesn’t necessarily bear much resemblance to people in the way it processes information.

Even IBM’s Watson, which acted somewhat like a human when playing Jeopardy, wasn’t using anything like the rational processes humans use.
Tough tasks
Davis says he uses another definition, centered on what one wants a computer to do. “There are a number of cognitive tasks that people do easily — often, indeed, with no conscious thought at all — but that are extremely hard to program on computers. Archetypal examples are vision and natural language understanding. Artificial intelligence, as I define it, is the study of getting computers to carry out these tasks,” he said….
Computer vision has made a lot of strides in the past decade — cameras can now recognize faces. Other tasks, though, are proving tougher. For example, Davis and NYU psychology professor Gary Marcus wrote in the Communications of the Association for Computing Machinery of “common sense” tasks that computers find very difficult. A robot serving drinks, for example, can be programmed to recognize a request for one, and even to manipulate a glass and pour one. But if a fly lands in the glass the computer still has a tough time deciding whether to pour the drink in and serve it (or not).

Common sense
The issue is that much of “common sense” is very hard to model. Computer scientists have taken several approaches to get around that problem. IBM’s Watson, for instance, was able to do so well on Jeopardy! because it had a huge database of knowledge to work with and a few rules to string words together to make questions and answers. Watson, though, would have a difficult time with a simple open-ended conversation.
Beyond tasks, though, is the issue of learning. Machines can learn, said Kathleen McKeown, a professor of computer science at Columbia University. “Machine learning is a kind of AI,” she said.
Some machine learning works in a way similar to the way people do it, she noted. Google Translate, for example, uses a large corpus of text in a given language to translate to another language, a statistical process that doesn’t involve looking for the “meaning” of words. Humans, she said, do something similar, in that we learn languages by seeing lots of examples.
That said, Google Translate doesn’t always get it right, precisely because it doesn’t seek meaning and can sometimes be fooled by synonyms or differing connotations….
The upshot is AIs that can handle certain tasks well exist, as do AIs that look almost human because they have a large trove of data to work with. Computer scientists have been less successful coming up with an AI that can think the way we expect a human being to, or to act like a human in more than very limited situations…. https://www.livescience.com/55089-artificial-intelligence.html

AI might prove useful in diagnosing cognitive decline leading to Alzheimer’s.

Science Daily reported in AI could predict cognitive decline leading to Alzheimer’s disease in the next five years:

A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer’s disease.
Dr. Mallar Chakravarty, a computational neuroscientist at the Douglas Mental Health University Institute, and his colleagues from the University of Toronto and the Centre for Addiction and Mental Health, designed an algorithm that learns signatures from magnetic resonance imaging (MRI), genetics, and clinical data. This specific algorithm can help predict whether an individual’s cognitive faculties are likely to deteriorate towards Alzheimer’s in the next five years.
“At the moment, there are limited ways to treat Alzheimer’s and the best evidence we have is for prevention. Our AI methodology could have significant implications as a ‘doctor’s assistant’ that would help stream people onto the right pathway for treatment. For example, one could even initiate lifestyle changes that may delay the beginning stages of Alzheimer’s or even prevent it altogether,” says Chakravarty, an Assistant Professor in McGill University’s Department of Psychiatry.
The findings, published in PLOS Computational Biology, used data from the Alzheimer’s Disease NeuroImaging Initiative. The researchers trained their algorithms using data from more than 800 people ranging from normal healthy seniors to those experiencing mild cognitive impairment, and Alzheimer’s disease patients. They replicated their results within the study on an independently collected sample from the Australian Imaging and Biomarkers Lifestyle Study of Ageing.
Can the predictions be improved with more data?
“We are currently working on testing the accuracy of predictions using new data. It will help us to refine predictions and determine if we can predict even farther into the future,” says Chakravarty. With more data, the scientists would be able to better identify those in the population at greatest risk for cognitive decline leading to Alzheimer’s.
According to the Alzheimer Society of Canada, 564,000 Canadians had Alzheimer’s or another form of dementia in 2016. The figure will rise to 937,000 within 15 years.
Worldwide, around 50 million people have dementia and the total number is projected to reach 82 million in 2030 and 152 million in 2050, according to the World Health Organization. Alzheimer’s disease, the most common form of dementia, may contribute to 60-70% of cases. Presently, there is no truly effective treatment for this disease…. https://www.sciencedaily.com/releases/2018/10/181004155421.htm
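The workflow Science Daily describes, training on one cohort (ADNI) and then replicating on an independently collected sample (AIBL), can be sketched roughly as follows. The file names, column names, and choice of model are placeholders, not details taken from the published study.

```python
# Rough sketch (not the published pipeline): fit a classifier on multimodal features
# from one cohort and evaluate it on an independently collected replication cohort.
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

train = pd.read_csv("adni_like_cohort.csv")   # MRI, genetic, and clinical columns plus a label
test = pd.read_csv("aibl_like_cohort.csv")    # independent replication sample, same columns

label = "declined_within_5_years"             # 1 if cognition deteriorated toward Alzheimer's
features = [c for c in train.columns if c != label]

model = GradientBoostingClassifier(random_state=0)
model.fit(train[features], train[label])

# Performance on the independently collected sample is the key test of whether
# the learned "signatures" generalize beyond the training cohort.
probs = model.predict_proba(test[features])[:, 1]
print("Replication-sample AUC:", round(roc_auc_score(test[label], probs), 3))
```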

Citation:

AI could predict cognitive decline leading to Alzheimer’s disease in the next five years
Algorithms may help doctors stream people onto prevention path sooner
Date: October 4, 2018
Source: McGill University
Summary:
A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer’s disease.

Journal Reference:
Nikhil Bhagwat, Joseph D. Viviano, Aristotle N. Voineskos, M. Mallar Chakravarty. Modeling and prediction of clinical symptom trajectories in Alzheimer’s disease using longitudinal data. PLOS Computational Biology, 2018; 14 (9): e1006376 DOI: 10.1371/journal.pcbi.1006376

Here is the press release from McGill University:

AI Could Predict Cognitive Decline Leading to Alzheimer’s Disease in the Next 5 Years
News
Algorithms may help doctors stream people onto prevention path sooner
PUBLISHED: 4 OCT 2018
A team of scientists has successfully trained a new artificial intelligence (AI) algorithm to make accurate predictions regarding cognitive decline leading to Alzheimer’s disease.
Dr. Mallar Chakravarty, a computational neuroscientist at the Douglas Mental Health University Institute, and his colleagues from the University of Toronto and the Centre for Addiction and Mental Health, designed an algorithm that learns signatures from magnetic resonance imaging (MRI), genetics, and clinical data. This specific algorithm can help predict whether an individual’s cognitive faculties are likely to deteriorate towards Alzheimer’s in the next five years.
“At the moment, there are limited ways to treat Alzheimer’s and the best evidence we have is for prevention. Our AI methodology could have significant implications as a ‘doctor’s assistant’ that would help stream people onto the right pathway for treatment. For example, one could even initiate lifestyle changes that may delay the beginning stages of Alzheimer’s or even prevent it altogether,” says Chakravarty, an Assistant Professor in McGill University’s Department of Psychiatry.
The findings, published in PLOS Computational Biology, used data from the Alzheimer’s Disease NeuroImaging Initiative. The researchers trained their algorithms using data from more than 800 people ranging from normal healthy seniors to those experiencing mild cognitive impairment, and Alzheimer’s disease patients. They replicated their results within the study on an independently collected sample from the Australian Imaging and Biomarkers Lifestyle Study of Ageing.
Can the predictions be improved with more data?
“We are currently working on testing the accuracy of predictions using new data. It will help us to refine predictions and determine if we can predict even farther into the future,” says Chakravarty. With more data, the scientists would be able to better identify those in the population at greatest risk for cognitive decline leading to Alzheimer’s.
According to the Alzheimer Society of Canada, 564,000 Canadians had Alzheimer’s or another form of dementia in 2016. The figure will rise to 937,000 within 15 years.
Worldwide, around 50 million people have dementia and the total number is projected to reach 82 million in 2030 and 152 million in 2050, according to the World Health Organization. Alzheimer’s disease, the most common form of dementia, may contribute to 60–70% of cases. Presently, there is no truly effective treatment for this disease.

This work was funded by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, the Fonds de recherche du Québec—Santé, Weston Brain Institute, Michael J. Fox Foundation for Parkinson’s Research, Alzheimer’s Society, Brain Canada, and the McGill University Healthy Brains for Healthy Lives – Canada First Research Excellence Fund.
The article “Modeling and prediction of clinical symptom trajectories in Alzheimer’s disease” was published in PLOS Computational Biology
For information and interviews
Bruno Geoffroy
Press Information Officer – Media Relations Office
CIUSSS de l’Ouest-de-l’Île-de-Montréal (Douglas Mental Health University Institute)
Tel.: 514-630-2225, ext. 5257 / relations.medias.comtl@ssss.gouv.qc.ca

Alzheimer’s and Dementia Alliance of Wisconsin described why early detection is important:

Early diagnosis is key.
There are at least a dozen advantages to obtaining an early and accurate diagnosis when cognitive symptoms are first noticed.
1. Your symptoms might be reversible.
The symptoms you are concerned about might be caused by a condition that is reversible. And even if there is also an underlying dementia such as Alzheimer’s disease, diagnosis and treatment of reversible conditions can improve brain function and reduce symptoms.

2. It may be treatable.
Some causes of cognitive decline are not reversible, but might be treatable. Appropriate treatment can stop or slow the rate of further decline.
3. With treatments, the sooner the better.
Treatment of Alzheimer’s and other dementia-causing diseases is typically most effective when started early in the disease process. Once more effective treatments become available, obtaining an early and accurate diagnosis will be even more crucial.

4. Diagnoses are more accurate early in the disease process.
A more accurate diagnosis is possible when a complete history can be taken early in the disease process, while the person is still able to answer questions and report concerns and when observers can still recall the order in which symptoms first appeared. Obtaining an accurate diagnosis can be difficult once most of the brain has become affected.
5. It’s empowering.
An earlier diagnosis enables the person to participate in their own legal, financial, and long-term care planning and to make their wishes known to family members.
6. You can focus on what’s important to you.
It allows the person the opportunity to reprioritize how they spend their time – focusing on what matters most to them – perhaps completing life goals such as travel, recording family history, completing projects, or making memories with grandchildren while they still can.
7. You can make your best choices.
Early diagnosis can prevent unwise choices that might otherwise be made in ignorance – such as moving far away from family and friends, or making legal or financial commitments that will be hard to keep as the disease progresses.
8. You can use the resources available to you.
Individuals diagnosed early in the disease process can take advantage of early-stage support groups and learn tips and strategies to better manage and cope with the symptoms of the disease.
9. Participate or advocate for research.
Those diagnosed early can also take advantage of clinical trials – or advocate for more research and improved care and opportunities.
10. You can further people’s understanding of the disease.
Earlier diagnosis helps to reduce the stigma associated with the disease when we learn to associate the disease with people in the early stages, when they are still cogent and active in the community.
11. It will help your family.
An earlier diagnosis gives families more opportunity to learn about the disease, develop realistic expectations, and plan for their future together – which can result in reduced stress and feelings of burden and regret later in the disease process.
12. It will help you, too.
Early diagnosis allows the person and family to attribute cognitive changes to the disease rather than to personal failings – preserving the person’s ego throughout the disease process…. https://alzwisc.org/Importance%20of%20an%20early%20diagnosis.htm

AI’s potential role in the early detection and treatment of Alzheimer’s is an example of better living through technology.

Resources:
What Is Alzheimer’s? https://www.alz.org/alzheimers-dementia/what-is-alzheimers

Understanding Alzheimer’s Disease: the Basics https://www.webmd.com/alzheimers/guide/understanding-alzheimers-disease-basics

What’s to know about Alzheimer’s disease? https://www.medicalnewstoday.com/articles/159442.php

Alzheimer’s Disease https://www.cdc.gov/aging/aginginfo/alzheimers.htm

What is Artificial Intelligence? https://www.computerworld.com/article/2906336/emerging-technology/what-is-artificial-intelligence.html

Artificial Intelligence: What it is and why it matters https://www.sas.com/en_us/insights/analytics/what-is-artificial-intelligence.html
Brain https://drwilda.com/tag/brain/

Where information leads to Hope. © Dr. Wilda.com

Dr. Wilda says this about that ©

Blogs by Dr. Wilda:

COMMENTS FROM AN OLD FART©
http://drwildaoldfart.wordpress.com/

Dr. Wilda Reviews ©
http://drwildareviews.wordpress.com/

Dr. Wilda ©
https://drwilda.com/