
Topic: Behavior


Gizmodo

A new study by researchers at MIT explores how AI chatbots can impact people’s feelings and mood, reports Matthew Gault for Gizmodo. “One of the big takeaways is that people who used the chatbots casually and didn’t engage with them emotionally didn’t report feeling lonelier at the end of the study,” explains Gault. “Yet, if a user said they were lonely before they started the study, they felt worse after it was over.”

The Guardian

Researchers at MIT and elsewhere have found that “heavy users of ChatGPT tend to be lonelier, more emotionally dependent on the AI tool and have fewer offline social relationships,” reports Rachel Hall for The Guardian. “The researchers wrote that the users who engaged in the most emotionally expressive personal conversations with the chatbots tended to experience higher loneliness – though it isn’t clear if this is caused by the chatbot or because lonely people are seeking emotional bonds,” explains Hall. 

CBS News

Graduate student Cathy Fang speaks with CBS News reporter Lindsey Reiser about her research studying the effects of AI chatbots on people’s emotional well-being. Fang explains that she and her colleagues found that how the chatbot interacts with the user is important, “but also how the user interacts with the chatbot is equally important. Both influence the user’s emotional and social well-being.” She adds: “Overall, we found that extended use is correlated with more negative outcomes.”

Fortune

Researchers at MIT and elsewhere have found “that frequent chatbot users experience more loneliness and emotional dependence,” reports Beatrice Nolan for Fortune. “The studies set out to investigate the extent to which interactions with ChatGPT impacted users’ emotional health, with a focus on the use of the chatbot’s advanced voice mode,” explains Nolan.

Forbes

Forbes reporter Tanya Arturi highlights research by Prof. Basima Tewfik on the impact of imposter syndrome. Tewfik’s “studies indicate that the behaviors exhibited by individuals experiencing imposter thoughts (such as increased effort in communication and interpersonal interactions) can actually enhance job performance,” explains Arturi. “Instead of resisting their feelings of self-doubt, professionals who lean into these emotions may develop stronger interpersonal skills, outperforming their non-imposter peers in collaboration and teamwork.” 

Business Insider

A new study by Prof. Jackson Lu and graduate student Lu Doris Zhang finds that assertiveness is key to moving up the career ladder, and that debate training could help improve an individual’s chances of moving into a leadership role, reports Julia Pugachevsky for Business Insider. “If someone knows when to voice their opinions in a diplomatic and fruitful way, they will get more attention,” says Lu. 

The Washington Post

A new study co-authored by Prof. David Rand found that there was a “20 percent reduction in belief in conspiracy theories after participants interacted with a powerful, flexible, personalized GPT-4 Turbo conversation partner,” writes Annie Duke for The Washington Post. “Participants demonstrated increased intentions to ignore or unfollow social media accounts promoting the conspiracies, and significantly increased willingness to ignore or argue against other believers in the conspiracy,” writes Duke. “And the results appear to be durable, holding up in evaluations 10 days and two months later.”

The Wall Street Journal

Postdoctoral Associate Pat Pataranutaporn speaks with Wall Street Journal reporter Heidi Mitchell about his work developing Future You, an online interactive AI platform that “allows users to create a virtual older self—a chatbot that looks like an aged version of the person and is based on an AI text system known as a large language model, then personalized with information that the user puts in.” Pataranutaporn explains: “I want to encourage people to think in the long term, to be less anxious about an unknown future so they can live more authentically today.” 

Salon

Researchers from MIT and elsewhere have suggested that “the impact of news that is factually inaccurate — including fake news, misinformation and disinformation — pales in comparison to the impact of news that is factually accurate but misleading,” reports Sandra Matz for Salon. “According to researchers, for example, the impact of slanted news stories encouraging vaccine skepticism during the COVID-19 pandemic was about 46-fold greater than that of content flagged as fake by fact-checkers,” writes Matz.

Knowable Magazine

Knowable Magazine reporter Katherine Ellison spotlights Future You, a new program developed by researchers at MIT that “offers young people a chance to chat with an online, AI-generated simulation of themselves at age 60.” 

VICE

Researchers at MIT and elsewhere have developed “Future You,” a generative AI platform that lets users converse with an AI-generated simulation of their potential future self, reports Sammi Caramela for Vice. The research team hopes “talking to a relatable, virtual version of your future self about your current stressors, future goals, and your beliefs can improve anxiety, quell any obsessive thoughts, and help you make better decisions,” writes Caramela.

The Hill

Researchers from MIT and Oxford University have found that “social media platforms’ suspensions of accounts may not be rooted in political biases, but rather certain political groups’ tendency to share misinformation,” reports Miranda Nazzaro for The Hill. “Thus, even under politically neutral anti-misinformation policies, political asymmetries in enforcement should be expected,” the researchers wrote. “Political imbalance in enforcement need not imply bias on the part of social media companies implementing anti-misinformation policies.”

Financial Times

A new working paper by MIT Prof. Antoinette Schoar and Brandeis Prof. Yang Sun explores how different people react to financial advice, reports Robin Wigglesworth for the Financial Times. “The results indicate that most people do update their beliefs in the direction of the advice they receive, irrespective of their previous views,” writes Wigglesworth.

New Scientist

Researchers at MIT and elsewhere have found that “human memories can be distorted by photos and videos edited by artificial intelligence,” reports Matthew Sparkes for New Scientist. “I think the worst part here, that we need to be aware or concerned about, is when the user isn’t aware of it,” says postdoctoral fellow Samantha Chan. “We definitely have to be aware and work together with these companies, or have a way to mitigate these effects. Maybe have sort of a structure where users can still control and say ‘I want to remember this as it was’, or at least have a tag that says ‘this was a doctored photo, this was a changed photo, this was not a real one’.”

TechCrunch

TechCrunch reporters Kyle Wiggers and Devin Coldewey spotlight a new generative AI model developed by MIT researchers that can help counteract conspiracy theories. The researchers “had people who believed in conspiracy-related statements talk with a chatbot that gently, patiently, and endlessly offered counterevidence to their arguments,” explain Wiggers and Coldewey. These conversations led the participants to report a 20% reduction in the associated belief two months later.