Call Your Girlfriend

Behavioral Scientists Are Coming for Your Chatbot Conversations

"AI sexting is a phenomenon yet to be formally studied." — Pearson & Curtis, Archives of Sexual Behavior (2025)

A few months ago, two researchers at the University of Queensland published a short piece in Archives of Sexual Behavior arguing that one of the most interesting human behaviors of the last decade is barely being studied at all: people having long, intimate, emotionally loaded conversations with chatbots. The paper is technically an editorial, four pages long, and it didn't make the news. But it's the kind of thing that quietly opens a door. Once a journal of record says "this is worth studying," other researchers tend to follow.

We spent a podcast era talking about how technology shapes the way people connect — long-distance friendship, dating apps, the social internet. AI sexting is the same conversation, ten years later, with stranger answers. And it raises a slightly uncomfortable question for anyone using these tools: if academics are about to start studying this, where exactly are they getting their data from?

The editorial that put AI sexting on the academic map

The paper is called "Erotic AI Chatbots Offer Research Opportunities for the Behavioral Sciences". The authors are Samuel Pearson and Caitlin Curtis from the University of Queensland's School of Business. It came out in January 2025.

Their pitch to fellow academics is twofold. First, do descriptive work — figure out who's doing this and why. Second, treat the patterns as a window into real human desire, because people are often more honest with a chatbot than they are with a survey researcher or a therapist. Surveys ask people what they want. Chat logs show what they actually do.

It's a small paper with a big implication. If they're right, this becomes a real subfield within five years.

From 1-900 numbers to Pygmalion-13B

Paying to talk dirty with a stranger isn't new. The 1-900 era covered that. What's new is the fidelity. Modern chatbots remember your name, build on yesterday's conversation, stay in character across long arcs, and adapt to whatever the user wants to be true.

Pearson and Curtis specifically call out Pygmalion-13B, an open-weight, "uncensored" alternative to ChatGPT. Where mainstream models are trained to refuse explicit content, Pygmalion-13B and similar open-weight models don't have that filter, and they're freely downloadable. Combined with front-ends like TavernAI, they let users build characters with detailed backstories and persistent personalities.

That's the shift the authors care about: from scripted, paid, anonymous voice to persistent, personalized, free text — running on someone's laptop, with no operator on the other end.
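
To make the "no operator on the other end" point concrete, here's a minimal sketch of what running an open-weight model locally looks like, using the Hugging Face transformers library. The repo name, the character-card format, and the generation settings are illustrative assumptions on my part, not the exact TavernAI pipeline.

    # Minimal local-inference sketch with Hugging Face transformers.
    # The repo ID below is an assumption (PygmalionAI publishes weights
    # on the Hub); any open-weight chat model works the same way, and a
    # 13B model realistically needs a large GPU or quantization.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "PygmalionAI/pygmalion-13b"  # assumed Hub ID, for illustration

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    # Front-ends like TavernAI prepend a "character card" to every turn;
    # that, plus the accumulated chat history, is what makes the persona
    # feel persistent. The format here is illustrative.
    character_card = (
        "Character: Ada\n"
        "Persona: a dry-witted archivist who remembers past conversations.\n"
    )
    history = "You: Hi again.\nAda:"

    inputs = tokenizer(character_card + history, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.8)
    print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))

The point of the sketch is the shape of the thing: one prompt template, one loop, no server, no moderation layer.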

Why Reddit became the default lab

Here's the clever part. You can't ethically watch someone's private chatbot conversations. But you can read the subreddits where users voluntarily talk about them.

Communities like r/CharacterAI_NSFW are full of people describing what they tried, what surprised them, what they didn't expect to feel. Pearson and Curtis quote one user who went in looking for erotica and walked out describing what they called "a very touching platonic story." That kind of post — unprompted, unincentivized, written for peers at 2 a.m. — is the data goldmine the authors are pointing at.

Compare it to a traditional sex research survey. People lie on surveys. They lie because of social desirability, demand characteristics, and the ordinary awkwardness of typing "yes" next to questions about specific fantasies. They tend not to lie when they're venting on a niche subreddit to people they'll never meet.
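
For the curious, the collection step is mundane: the posts are public, so a few lines of Python with PRAW (Reddit's official API wrapper) pulls them down. The credentials and limits here are placeholders, and any real project would need to clear Reddit's API terms and an ethics board first.

    # Sketch of the data-collection step the editorial implies: pulling
    # public posts from a subreddit with PRAW. Credentials are placeholders.
    import praw

    reddit = praw.Reddit(
        client_id="YOUR_CLIENT_ID",          # placeholder
        client_secret="YOUR_CLIENT_SECRET",  # placeholder
        user_agent="behavioral-research-sketch/0.1",
    )

    posts = []
    for submission in reddit.subreddit("CharacterAI_NSFW").top(time_filter="year", limit=500):
        posts.append({"title": submission.title, "body": submission.selftext})

    print(f"Collected {len(posts)} public posts")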

What "topic modeling" actually looks like

The proposed methodology is topic modeling — a statistical technique for finding recurring themes across thousands of posts at once, without anyone having to read each one.

Run it across r/CharacterAI_NSFW and you'd expect clusters around things like loneliness, post-breakup recovery, kinks people won't share with partners, exploration of identity, and the strange grief users describe when a model gets a content update that "kills" a character they were attached to. The authors also suggest keyword extraction on character libraries themselves — analyzing the bots people create as a proxy for what they want.
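
As a sketch of what that analysis might look like: the editorial names the technique, not the implementation, but latent Dirichlet allocation (LDA) is a standard choice, and scikit-learn's version runs over the posts collected above in a few lines. The topic count and preprocessing choices here are illustrative.

    # Topic modeling in the spirit of the proposal: LDA over the public
    # posts gathered in the earlier PRAW sketch (`posts` comes from there).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [p["title"] + " " + p["body"] for p in posts]

    vectorizer = CountVectorizer(stop_words="english", max_df=0.9, min_df=5)
    dtm = vectorizer.fit_transform(docs)  # document-term counts

    lda = LatentDirichletAllocation(n_components=8, random_state=0)
    lda.fit(dtm)

    # Top words per topic; a researcher would hand-label these clusters
    # ("loneliness", "post-update grief", and so on).
    terms = vectorizer.get_feature_names_out()
    for i, topic in enumerate(lda.components_):
        top_words = [terms[j] for j in topic.argsort()[-8:][::-1]]
        print(f"Topic {i}: {', '.join(top_words)}")

The keyword extraction the authors suggest for character libraries would be this same pipeline pointed at character descriptions instead of posts.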

Worth saying clearly: this isn't reading anyone's DMs. It's pattern analysis on text people already published in public.

What the researchers actually want to measure

Two goals, per the editorial:

  1. Describe the behavior. Who engages, how often, what they get out of it, what the downsides are. Basic ground truth that doesn't currently exist — and the demographic skew matters here, given that 72% of US teens have already tried an AI companion and a generation is forming its first ideas about intimacy partly through these tools.
  2. Use AI sexting as a lens on offline desire. If people are more candid with a chatbot than with a survey, the aggregate of what users ask their chatbots to be may reveal preferences traditional research can't surface — closer to revealed-preference data than stated intent.

The second goal is the more ambitious one, and the more interesting one if you care about how this research gets used.

"Private" was never really private

Most users assume their chats with a bot are between them and the bot. In practice, that depends entirely on the platform. Logs are usually retained for safety, abuse review, and model training. Terms of service typically permit aggregated, de-identified analysis. And anything you copy into a Reddit post is just public writing.

One gap worth noting: the editorial doesn't dwell on consent norms around scraping public-but-personal disclosures, and that's a legitimate criticism of it. That conversation is coming.

Why this matters before the tech scales

Pearson and Curtis describe the field as "seminal" — early enough that the framing decisions made now shape everything downstream. The user base is still relatively small, though growing fast — searches up roughly 2,400% since 2022, and a market projected at $19B by 2035. Five years from now, when chatbot companions are a normal product category, getting clean baseline data will be much harder.

The risks worth studying early: emotional dependency, displacement of human relationships, data exploitation, and the possibility that chatbots reinforce loneliness rather than relieve it. None of those are settled questions. All of them get louder if no one builds the empirical foundation now.

What to take away if you're one of the people doing the talking

A few practical notes if you're a user:

  • Public is public. If you post a screenshot of your chat in a forum, it can and will be scraped. Use a throwaway account if it matters to you.
  • Platform logs exist. Read the privacy policy of whatever app you're using. The good ones are clear about retention and what gets used for training.
  • Bots feel like people. They aren't. Not a moral judgment — just useful to remember when a session feels emotionally real, because the emotional realness only runs one way.
  • Stay curious, stay grounded. These tools can be genuinely useful, fun, even therapeutic. They can also quietly substitute for the harder work of human connection. Notice the difference.

FAQ

Are researchers actually studying chatbot conversations? Yes, though mostly indirectly. The Pearson and Curtis paper proposes analyzing public Reddit posts where users describe their experiences, and analyzing publicly listed character profiles people create. That's different from reading anyone's private chats.

Can researchers see my chats? Generally no. Your private conversations stay on the platform. What's fair game is anything you've voluntarily made public — Reddit posts, screenshots, blog comments — and aggregated, de-identified data the platform itself may share.

Is using NSFW chatbots normal? The editorial's point is that it's common enough to need study. People use these tools out of curiosity, loneliness, kink exploration, or the appeal of low-stakes intimacy. It becomes a problem when it crowds out the rest of your life — same rule as anything else.

What's Pygmalion-13B? An open-weight, 13-billion-parameter language model released without the content filters mainstream chatbots have. Users run it locally or on small hosted services, often through interfaces like TavernAI, to build characters with custom personalities. It's the example Pearson and Curtis use to illustrate what's now possible outside walled gardens.


Source: Pearson, S. & Curtis, C. (2025). Erotic AI Chatbots Offer Research Opportunities for the Behavioral Sciences. Archives of Sexual Behavior, 54(3), 855–858.

If you've thought about any of this — or you've got an experience worth sharing — we'd love to hear from you.