Last week, Republican presidential candidate Vivek Ramaswamy delved into AI policy, arguing that the human response to AI poses the greatest risk and emphasising the critical need for competent leadership to navigate the challenges of this rapidly advancing field.
Citing tennis, Ramaswamy said his first job was as a ball boy in Cincinnati, and that he was promoted to line judge in the ninth grade. That was when he first stumbled upon AI. “So, as a human line judge, you make the line calls. It’s not done that way anymore. It’s all done by AI,” he said, explaining that the system predicts where the ball is going to land.
He said that when humans made the calls, players used to argue with the line judges. “Something funny happened when the AI started making the call. The first generation of the AI. It was so bad that you could literally see it with your eye that it was like a bad call. But the funny thing is that the players stopped arguing with the calls,” he said, adding that the biggest danger of AI is actually the human response to it.
“I don’t mean to get too philosophical, but I think it’s actually important,” he added.
Turning to regulation at the policy level, he said it is important to draw hard lines so that AI-powered algorithms do not regularly interface with kids.
“I think that we should not ban anything that China is also not willing to ban,” Ramaswamy added, saying that companies should instead be held liable for any unforeseen consequences of the protocols they develop.
“At least it makes them take the risks into account on the front end, which they are not doing today,” he said, calling this the right answer as a matter of policy and pointing to ChatGPT as an example of how things could go wrong.
AI expert Gary Marcus retweeted the video, calling it the “Weirdest take on AI I have ever seen, and I’ve seen some weird ones.” He nonetheless agreed with Ramaswamy on the potential dangers of AI, particularly societal manipulation and loss of control, but expressed reservations about the emphasis on reviving faith and patriotism as primary solutions.
Despite differing on specific solutions, Marcus underscored the importance of addressing the societal anxieties Ramaswamy highlighted and encouraged evidence-based approaches to AI policy, grounded in factual data and rigorous analysis rather than simplistic appeals to faith or nationalism.