Victor Miller [Archival audio clip]: She’s asking what policies are most important to you, VIC?

VIC [Archival audio clip]: The most important policies to me focus on transparency, economic development, and innovation.

Leah Feiger: That is so bizarre. I've got to ask, could VIC be exposed to sources of information other than these public records? Say, an email from a conspiracy theorist who wants VIC to do something not so good with elections, something that would not represent its constituents.

Vittoria Elliott: Great question. I asked Miller, “Hey, you’ve built this bot on top of ChatGPT. We know that sometimes there’s problems or biases in the data that go into training these models. Are you concerned that VIC could imbibe some of those biases or there could be problems?” He said, “No, I trust OpenAI. I believe in their product.” You’re right. He decided, because of what’s important to him as someone who cares a lot about Cheyenne’s governance, to feed this bot hundreds, and hundreds, and hundreds of pages of what are called supporting documents. The kind of documents that people will submit in a city council meeting. Whether that’s a complaint, or an email, or a zoning issue, or whatever. He fed that to VIC. But you’re right, these chatbots can be trained on other material. He said that he actually asked VIC, “What if someone tries to spam you? What if someone tries to trick you? Send you emails and stuff.” VIC apparently responded to him saying, “I’m pretty confident I could differentiate what’s an actual constituent concern and what’s spam, or what’s not real.”

Leah Feiger: I guess I would just say to that, one-third of Americans right now don't believe that President Joe Biden legitimately won the 2020 election, but I'm so glad this robot is very, very confident in its ability to decipher dis- and misinformation here.

Vittoria Elliott: Totally.

Leah Feiger: That was VIC in Wyoming. Tell us a little more about AI Steve in the UK. How is it different from VIC?

Vittoria Elliott: For one thing, AI Steve is actually the candidate.

Leah Feiger: What do you mean actually the candidate?

Vittoria Elliott: He’s on the ballot.

Leah Feiger: Oh, okay. There’s no meat puppet?

Vittoria Elliott: There is a meat puppet, and that's Steve Endicott. He's a Brighton-based businessman. He describes himself as being the person who will attend Parliament, do the human things.

Leah Feiger: Sure.

Vittoria Elliott: But when people go to vote next month in the UK, they actually have the ability not to vote for Steve Endicott, but to vote for AI Steve.

Leah Feiger: That’s incredible. Oh my God. How does that work?

Vittoria Elliott: The way Steve Endicott and Jeremy Smith, the developer of AI Steve, described it to me is as a big catchment for community feedback. On the backend, what happens is people can talk to or call into AI Steve, which can apparently hold 10,000 simultaneous conversations at any given point. They can say, "I want to know when trash collection is going to be different." Or, "I'm upset about fiscal policy," or whatever. Those conversations get transcribed by the AI and distilled into these are the policy positions that constituents care about. But to make sure that people aren't spamming it and trying to trick it, what they're going to do is have what they call validators. Brighton is about an hour outside of London, and a lot of people commute between the two cities. They've said, "What we want to do is we want to have people who are on their commute, and we're going to ask them to sign up to these emails to be validators." They'll go through and say, "These are the policies that people have told AI Steve are important. Do you, a regular person who's actually commuting, find that to actually be valuable to you?" Anything that gets more than 50% interest, or approval, or whatever, that's the stuff that real Steve, who will be in Parliament, will be voting on. They have this second level of checks to make sure that whatever people are saying as feedback to the AI is checked by real humans. They're trying to make it a little harder for people to game the system.
