Explain it to a smart 15-year-old student: using AI to study social behaviors
Hi, I’m Adithya, and welcome to weless. If you enjoy Tim Ferriss with an Islamic twist (sometimes not) + tech, you’ll probably like this newsletter.
Innovation in the AI field (LLMs specifically) moves fast once you look into it. Hehe.
I recently read a paper about using AI models to study social behaviors.
Before diving in, here's a bit about why I find these papers so cool. I have this habit of downloading research papers and uploading them to ChatGPT to make sense of them. Because, come on, who reads these papers other than experts?
My prompt? “Explain it to a smart 15-year-old student.” It turns out, breaking down hard concepts this way makes them much easier to grasp.
I’m going to start a series where I cover new AI papers and present them simply. Give me a like if you’re in! :)
Today’s paper is titled: Automated Social Science: Language Models as Scientists and Subjects.
What does social science mean?
It’s like looking at how people behave and interact in different situations. The aim is to understand why people do what they do.
So, imagine you have a hypothesis that a Muslim woman leading a large tech company would be more effective than a white Christian man. Instead of doing lots of legwork, like running surveys, you can let the AI test it for you.
How will the AI run the experiment?
The AI sets up a virtual scenario with characters who act out your hypothesis. It changes one thing at a time, like the leader’s background, keeps everything else the same, and runs the scenario many times to see whether the company does better or worse. Based on those simulations, it suggests what might happen in real life.
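To make that loop concrete, here’s a minimal Python sketch of the idea. It is not the paper’s actual code, and `simulate_company_quarter` is a hypothetical stand-in for a real LLM call; the point is the shape of the experiment: vary one factor, repeat each condition many times, and compare average outcomes.

```python
import random
import statistics

def simulate_company_quarter(leader_background: str) -> float:
    """Hypothetical stand-in for a real LLM call.

    A real system would prompt an LLM with the full scenario
    ("You are the CEO, a {leader_background}...") and parse an
    effectiveness score out of its answer. Here we just return a
    random score so the sketch runs on its own.
    """
    return random.uniform(40, 90)

def run_experiment(conditions, trials=30):
    """Vary one factor (the 'treatment'), hold everything else fixed,
    and repeat each condition many times to average out noise."""
    return {
        condition: statistics.mean(
            simulate_company_quarter(condition) for _ in range(trials)
        )
        for condition in conditions
    }

if __name__ == "__main__":
    results = run_experiment(["Muslim woman", "white Christian man"])
    for condition, avg_score in results.items():
        print(f"Leader: {condition} -> average effectiveness {avg_score:.1f}/100")
```

In the actual paper, the outcome comes from LLM agents acting out the scenario with each other rather than a random number, but the experimental loop around them looks much like this.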
Where can I try this?
Currently, these experiments are mainly in research labs and universities. But some software companies are beginning to offer tools that let you simulate these scenarios on your computer.
So, what ideas will you test?
Thank you for reading.
Let’s keep the weless going.
If you enjoyed this edition, would you mind giving the heart below a click? If you didn’t enjoy it, tell me where I went wrong.