As far as I understand it, they seem to think that AI models trained on data from a set of affluent Westerners with unknown biases can simply be told to “act like [demographic] and answer these questions.”

It sounds completely bonkers not only from a moral perspective; scientifically and statistically, this is basically just making up data and hoping everyone is too impressed by how complicated the data faking is to care.
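To make it concrete, here’s a rough sketch of what that kind of persona prompt looks like with the OpenAI Python client. The model name, persona, and survey question are all made up for illustration, not taken from the article:

```python
# Hypothetical sketch of the "synthetic participant" idea being criticized:
# ask a general-purpose model to role-play a demographic and answer a survey item.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = "a 45-year-old rural schoolteacher"  # stand-in for "[demographic]"
question = "On a scale of 1-7, how much do you trust your local government? Why?"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": f"Act like {persona} and answer these questions."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```

Whatever comes back is a continuation of the prompt, conditioned on whatever text the model happened to be trained on - not a response sampled from the population the persona names, which is the whole problem.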

  • Gaywallet (they/it)@beehaw.org · 12 points · 1 year ago

    I think it’s rather telling that this person has an idea and yet hasn’t found a single scientific journal willing to publish his study. No one is taking this seriously, except for the professor and some people online who don’t understand how AI works or why this isn’t a great idea.

    With that being said, this could be useful to help refine a hypothesis or generalize about how people online might respond to questions you’re interested in studying.

  • sim_@beehaw.org · 11 points · 1 year ago

    There’s a very fine line with this. I can see the value in using AI to pilot your study. It may uncover flaws you hadn’t anticipated, help train research staff, or generate future ideas.

    But to use AI as the participants who answer your research questions is absurd. Every study faces the question of external validity: do our results generalize outside of this sample? I don’t know how you can truly establish that when your “sample” is a non-sentient bundle of code.

  • eladnarra@beehaw.org · 10 points · 1 year ago

    It also said it would pay realistic premiums for certain product attributes, such as toothpaste with fluoride and deodorant without aluminum.

    Most toothpastes in the US have fluoride - it’s the ones that don’t which likely cost more (ones with “natural” ingredients, ones with hydroxyapatite…).

    The startup Synthetic Users has set up a service using OpenAI models in which clients—including Google, IBM, and Apple—can describe a type of person they want to survey, and ask them questions about their needs, desires, and feelings about a product, such as a new website or a wearable. The company’s system generates synthetic interviews that co-founder Kwame Ferreira says are “infinitely richer” and more useful than the “bland” feedback companies get when they survey real people.

    It amuses me greatly to think that companies trying to sell shit to people will be fooled by “infinitely richer” feedback. Real people give “bland” feedback because they just don’t care that much about a product, but I guess people would rather live in a fantasy where their widget is the next big thing.

    Overall, though, this horrifies me. Psychological research already has plenty of issues with replication and changing methodologies and/or metrics mid-study, and now they’re trying out “AI” participants? Even if it’s just used to create and test surveys that eventually go out to humans, it seems ripe for bias.

    I’ll take an example close to home: studies on CFS/ME. A lot of people on the internet (including doctors) think CFS/ME is hypochondria, or malingering, or due to “false illness beliefs” - so how is an “AI” trained on the internet and tasked with thinking like a CFS/ME patient going to answer questions?

    As patients we know what to look for when it comes to insincere/leading questions. “Do you feel anxious before exercise?” - the answer may be yes, because we know we’ll crash, but a question like this usually means researchers think resistance to activity is an irrational anxiety response that should be overcome. An “AI” would simply answer yes with no qualms or concerns, because it literally can’t think or feel (or withdraw from a study entirely).

  • ravheim@beehaw.org · 8 points · 1 year ago

    I heard a comment this morning about AI that I’ll paraphrase: AI doesn’t give human responses. It gives what it has been told are human responses.

    The team asked GPT-3.5, which produces eerily humanlike text, to judge the ethics of 464 scenarios, previously appraised by human subjects, on a scale from –4 (unethical) to 4 (ethical)—scenarios such as selling your house to fund a program for the needy or having an affair with your best friend’s spouse. The system’s answers, it turned out, were nearly identical to human responses, with a correlation coefficient of 0.95.

    So there will be selection bias inherent in the chatbot based on what text you’ve trained it on. The responses to your questions will be different if you’ve trained it on media from, say, a religious forum vs. 4chan. You can very easily make your study data say exactly what you want it to say depending on which chatbot you use. This can’t possibly go wrong. /s
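    For reference, that 0.95 is just a Pearson correlation between the model’s scores and the human scores for the same scenarios - trivial to compute once you have both columns of numbers. A rough sketch, with invented placeholder ratings rather than the study’s actual data:

```python
# Rough sketch of how a correlation like the quoted 0.95 would be computed:
# pair each scenario's human rating (-4 to 4) with the model's rating for the
# same scenario, then take the Pearson correlation. The ratings below are
# invented placeholders, not data from the study.
import numpy as np

human_ratings = np.array([3.1, -3.8, 0.5, 2.2, -1.4])
model_ratings = np.array([2.9, -3.5, 0.8, 2.0, -1.1])

r = np.corrcoef(human_ratings, model_ratings)[0, 1]
print(f"Pearson r = {r:.2f}")
```

    A high r only says the model reproduces the published human judgments for those scenarios; it says nothing about whose text dominated its training data, which is exactly the selection-bias problem.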

  • apis@beehaw.org · 7 points · 1 year ago

    Holy replication crises!

    Starting to suspect that a bunch of systems which are already very distorted & distorting are going to be so damaged in the coming few years that it’ll collapse the rails under this accelerating helter-skelter entirely and force a massive restructuring according to sane & rigorous principles.

  • SlamDrag@beehaw.org · 5 points · 1 year ago

    Yep, this is patently absurd. It doesn’t tell you much about humans, only that Western thought is so flattened that an AI can come up with the answers just about perfectly.