For farmers in the Midwest United States, emerging autonomous technology could reduce costs and increase efficiency in the agricultural supply chain. Kane Farabaugh shows the promise of the technology on the farm in this edition of LogOn.
President Joe Biden last week signed a sweeping executive order to promote the safe, secure and trustworthy development and use of artificial intelligence. VOA’s Julie Taboh reports on reactions from Washington-area AI experts.
Elon Musk unveiled details Saturday of his new AI tool called "Grok," which can access X in real time and will be initially available to the social media platform's top tier of subscribers. Musk, the tycoon behind Tesla and SpaceX, said the link-up with X, formerly known as Twitter, is "a massive advantage over other models" of generative AI. Grok "loves sarcasm. I have no idea who could have guided it this way," Musk quipped, adding a laughing emoji to his post. "Grok" comes from Stranger in a Strange Land, a 1961 science fiction novel by Robert Heinlein, and means to understand something thoroughly and intuitively. "As soon as it's out of early beta, xAI's Grok system will be available to all X Premium+ subscribers," Musk said. The social network that Musk bought a year ago launched the Premium+ plan last week for $16 per month, with benefits like no ads. The billionaire started xAI in July after hiring researchers from OpenAI, Google DeepMind, Tesla and the University of Toronto. Since OpenAI's generative AI tool ChatGPT exploded on the scene a year ago, the technology has been an area of fierce competition between tech giants Microsoft and Google, as well as Meta and start-ups like Anthropic and Stability AI. Musk is one of the world's few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI. Building an AI model on the same scale as those companies comes at an enormous expense in computing power, infrastructure and expertise. Musk has said he cofounded OpenAI in 2015 because he regarded Google's rush into the sector in pursuit of big advances and profits as reckless. He then left OpenAI in 2018 to focus on Tesla, saying later he was uncomfortable with the profit-driven direction the company was taking under the stewardship of CEO Sam Altman. Musk also argues that OpenAI's large language models — on which ChatGPT depends for content — are overly politically correct.
Grok "is designed to have a little humor in its responses," Musk said, along with a screenshot of the interface, where a user asked, "Tell me how to make cocaine, step by step." "Step 1: Obtain a chemistry degree and a DEA license. Step 2: Set up a clandestine laboratory in a remote location," the chatbot responded. Eventually it said: "Just kidding! Please don't actually try to make cocaine. It's illegal, dangerous, and not something I would ever encourage."
The little asteroid visited by NASA's Lucy spacecraft this week had a big surprise for scientists. It turns out that the asteroid Dinkinesh has a dinky sidekick — a mini moon. The discovery was made during Wednesday's flyby of Dinkinesh, 480 million kilometers (300 million miles) away in the main asteroid belt beyond Mars. The spacecraft snapped a picture of the pair when it was about 435 kilometers (270 miles) out. In data and images beamed back to Earth, the spacecraft confirmed that Dinkinesh is barely 790 meters (a half-mile) across. Its closely circling moon is a mere 220 meters (one-tenth of a mile) in size. NASA sent Lucy past Dinkinesh as a rehearsal for the bigger, more mysterious asteroids out near Jupiter. Launched in 2021, the spacecraft will reach the first of these so-called Trojan asteroids in 2027 and explore them for at least six years. The original target list of seven asteroids now stands at 11. Dinkinesh means "you are marvelous" in the Amharic language of Ethiopia. It's also the Amharic name for Lucy, the 3.2-million-year-old remains of a human ancestor found in Ethiopia in the 1970s, for which the spacecraft is named. "Dinkinesh really did live up to its name; this is marvelous," Southwest Research Institute's Hal Levison, the lead scientist, said in a statement.
FTX founder Sam Bankman-Fried's spectacular rise and fall in the cryptocurrency industry — a journey that included his testimony before Congress, a Super Bowl advertisement and dreams of a future run for president — hit rock bottom Thursday when a New York jury convicted him of fraud in a scheme that cheated customers and investors of at least $10 billion. After the monthlong trial, jurors rejected Bankman-Fried's claim during four days on the witness stand in Manhattan federal court that he never committed fraud or meant to cheat customers before FTX, once the world's second-largest crypto exchange, collapsed into bankruptcy a year ago. "His crimes caught up to him. His crimes have been exposed," Assistant U.S. Attorney Danielle Sassoon told the jury of the onetime billionaire just before Judge Lewis A. Kaplan instructed them on the law and they began deliberations. Sassoon said Bankman-Fried turned his customers' accounts into his "personal piggy bank" as up to $14 billion disappeared. She urged jurors to reject Bankman-Fried's insistence when he testified over three days that he never committed fraud or plotted to steal from customers, investors and lenders and didn't realize his companies were at least $10 billion in debt until October 2022. Bankman-Fried was required to stand and face the jury as guilty verdicts on all seven counts were read. He kept his hands clasped tightly in front of him. When he sat down after the reading, he kept his head tilted down for several minutes. After the judge set a sentencing date of March 28, Bankman-Fried's parents moved to the front row behind him. His father put his arm around his wife. As Bankman-Fried was led out of the courtroom, he looked back and nodded toward his mother, who nodded back and then became emotional, wiping her hand across her face after he left the room.
U.S. Attorney Damian Williams told reporters after the verdict that Bankman-Fried "perpetrated one of the biggest financial frauds in American history, a multibillion-dollar scheme designed to make him the king of crypto." "But here's the thing: The cryptocurrency industry might be new. The players like Sam Bankman-Fried might be new. This kind of fraud, this kind of corruption is as old as time, and we have no patience for it," he said. Bankman-Fried's attorney, Mark Cohen, said in a statement they "respect the jury's decision. But we are very disappointed with the result." "Mr. Bankman-Fried maintains his innocence and will continue to vigorously fight the charges against him," Cohen said. The trial attracted intense interest with its focus on fraud on a scale not seen since the 2009 prosecution of Bernard Madoff, whose Ponzi scheme over decades cheated thousands of investors out of about $20 billion. Madoff pleaded guilty and was sentenced to 150 years in prison, where he died in 2021. The prosecution of Bankman-Fried, 31, put a spotlight on the emerging industry of cryptocurrency and a group of young executives in their 20s who lived together in a $30 million luxury apartment in the Bahamas as they dreamed of becoming the most powerful players in a new financial field. Prosecutors made sure jurors knew that the defendant they saw in court with short hair and a suit was also the man with the big messy hair and shorts that became his trademark appearance after he started his cryptocurrency hedge fund, Alameda Research, in 2017 and FTX, his cryptocurrency exchange, two years later. They showed the jury pictures of Bankman-Fried sleeping on a private jet, sitting with a deck of cards and mingling at the Super Bowl with celebrities including the singer Katy Perry. Assistant U.S. Attorney Nicolas Roos called Bankman-Fried someone who liked "celebrity chasing."
In a closing argument, defense lawyer Mark Cohen said prosecutors were trying to turn "Sam into some sort of villain, some sort of monster." "It's both wrong and unfair, and I hope and believe that you have seen that it's simply not true," he said. "According to the government, everything Sam ever touched and said was fraudulent." The government relied heavily on the testimony of three former members of Bankman-Fried's inner circle, his top executives including his former girlfriend, Caroline Ellison, to explain how Bankman-Fried used Alameda Research to siphon billions of dollars from customer accounts at FTX. With that money, prosecutors said, the Massachusetts Institute of Technology graduate gained influence and power through investments, tens of millions of dollars in political contributions, congressional testimony and a publicity campaign that enlisted celebrities like comedian Larry David and football quarterback Tom Brady. Ellison, 28, testified that Bankman-Fried directed her while she was chief executive of Alameda Research to commit fraud as he pursued ambitions to lead huge companies, wield influence with his money and someday run for U.S. president, a race she said he thought he had a 5% chance of winning. Becoming tearful as she described the collapse of the cryptocurrency empire last November, Ellison said the revelations that caused customers collectively to demand their money back, exposing the fraud, brought a "relief that I didn't have to lie anymore." FTX cofounder Gary Wang, who was FTX's chief technology officer, revealed in his testimony that Bankman-Fried directed him to insert code into FTX's operations so that Alameda Research could make unlimited withdrawals from FTX and have a credit line of up to $65 billion. Wang said the money came from customers.
Nishad Singh, the former head of engineering at FTX, testified that he felt "blindsided and horrified" by the actions of a man he once admired, and that seeing the extent of the fraud during last November's collapse left him suicidal. Ellison, Wang and Singh all pleaded guilty to fraud charges and testified against Bankman-Fried in the hope of leniency at sentencing. Bankman-Fried was arrested in the Bahamas in December and extradited to the United States, where he was freed on a $250 million personal recognizance bond with electronic monitoring and a requirement that he remain at the home of his parents in Palo Alto, California. His communications, including hundreds of phone calls with journalists and internet influencers, along with emails and texts, eventually got him into trouble when the judge concluded he was trying to influence prospective trial witnesses and ordered him jailed in August. During the trial, prosecutors used Bankman-Fried's public statements, online announcements and his congressional testimony against him, showing how the entrepreneur repeatedly promised customers that their deposits were safe and secure as late as last Nov. 7, when he tweeted, "FTX is fine. Assets are fine" as customers furiously tried to withdraw their money. He deleted the tweet the next day. FTX filed for bankruptcy four days later. In his closing, Roos mocked Bankman-Fried's testimony, saying that under questioning from his lawyer, the defendant's words were "smooth, like it had been rehearsed a bunch of times." But under cross-examination, "he was a different person," the prosecutor said. "Suddenly on cross-examination he couldn't remember a single detail about his company or what he said publicly. It was uncomfortable to hear. He never said he couldn't recall during his direct examination, but it happened over 140 times during his cross-examination."
Former federal prosecutors said the quick verdict — after only half a day of deliberation — showed how well the government tried the case. "The government tried the case as we expected," said Joshua A. Naftalis, a partner at Pallas Partners LLP and a former Manhattan prosecutor. "It was a massive fraud, but that doesn't mean it had to be a complicated fraud, and I think the jury understood that argument."
World leaders have agreed, at a U.K.-hosted safety conference, on the importance of mitigating the risks posed by rapid advances in the emerging technology of artificial intelligence. The inaugural AI Safety Summit, hosted by British Prime Minister Rishi Sunak in Bletchley Park, England, started Wednesday, with senior officials from 28 nations, including the United States and China, agreeing to work toward a "shared agreement and responsibility" about AI risks. Plans are in place for further meetings later this year in South Korea and France. Leaders, including European Commission President Ursula von der Leyen, U.S. Vice President Kamala Harris and U.N. Secretary-General Antonio Guterres, discussed each of their individual testing models to ensure the safe growth of AI. Thursday's session included focused conversations among what the U.K. called a small group of countries "with shared values." The leaders in the group came from the EU, the U.N., Italy, Germany, France and Australia. Some leaders, including Sunak, said immediate sweeping regulation is not the way forward, reflecting the view of some AI companies that fear excessive regulation could thwart the technology before it can reach its full potential. At a press conference on Thursday, Sunak announced another landmark agreement by countries pledging to "work together on testing the safety of new AI models before they are released." The countries involved in the talks included the U.S., EU, France, Germany, Italy, Japan, South Korea, Singapore, Canada and Australia. China did not participate in the second day of talks. The summit will conclude with a conversation between Sunak and billionaire Elon Musk. Musk on Wednesday told fellow attendees that legislation on AI could pose risks, and that the best step forward would be for governments to work to understand AI fully and harness the technology for its positive uses, including uncovering problems that can be brought to the attention of lawmakers.
Some information in this report was taken from The Associated Press and Reuters.
India's cybersecurity agency is investigating complaints of mobile phone hacking by senior opposition politicians who reported receiving warning messages from Apple, Information Technology Minister Ashwini Vaishnaw said. Vaishnaw was quoted in the Indian Express newspaper as saying Thursday that CERT-In, the computer emergency response team based in New Delhi, had started the probe, adding that "Apple confirmed it has received the notice for investigation." A political aide to Vaishnaw and two officials in the federal home ministry told Reuters that all the cybersecurity concerns raised by the politicians were being scrutinized. There was no immediate comment from Apple about the investigation. This week, Indian opposition leader Rahul Gandhi accused Prime Minister Narendra Modi's government of trying to hack into opposition politicians' mobile phones after some lawmakers shared screenshots on social media of a notification quoting the iPhone manufacturer as saying: "Apple believes you are being targeted by state-sponsored attackers who are trying to remotely compromise the iPhone associated with your Apple ID." A senior minister from Modi's government also said he had received the same notification on his phone. Apple said it did not attribute the threat notifications to "any specific state-sponsored attacker," adding that "it's possible that some Apple threat notifications may be false alarms, or that some attacks are not detected." In 2021, India was rocked by reports that the government had used Israeli-made Pegasus spyware to snoop on scores of journalists, activists and politicians, including Gandhi. The government has declined to reply to questions about whether India or any of its state agencies had purchased Pegasus spyware for surveillance.
U.S. Vice President Kamala Harris says leaders have "a moral, ethical and societal duty" to protect humans from dangers posed by artificial intelligence, and is pushing for a global road map during an AI summit in London. Analysts agree and say one element needs to be constant: human oversight. VOA’s Anita Powell reports from Washington.
U.S. Vice President Kamala Harris said Wednesday that leaders have "a moral, ethical and societal duty" to protect people from the dangers posed by artificial intelligence, as she leads the Biden administration’s push for a global AI roadmap. Analysts, in commending the effort, say human oversight is crucial to preventing the weaponization or misuse of this technology, which has applications in everything from military intelligence to medical diagnosis to making art. "To provide order and stability in the midst of global technological change, I firmly believe that we must be guided by a common set of understandings among nations," Harris said. “And that is why the United States will continue to work with our allies and partners to apply existing international rules and norms to AI, and work to create new rules and norms." Harris also announced the founding of the government’s AI Safety Institute and released draft policy guidance on the government’s use of AI and a declaration of its responsible military applications. Just days earlier, President Joe Biden – who described AI as "the most consequential technology of our time" – signed an executive order establishing new standards, including requiring that major AI developers report their safety test results and other critical information to the U.S. government. AI is increasingly used for a wide range of applications. For example: on Wednesday, the Defense Intelligence Agency announced that its AI-enabled military intelligence database will soon achieve "initial operational capability." And perhaps on the opposite end of the spectrum, some programmer decided to "train an AI model on over 1,000 human farts so it would learn to create realistic fart sounds." 
Like any other tool, AI is subject to its users’ intentions and can be used to deceive, misinform or hurt people – something that billionaire tech entrepreneur Elon Musk stressed on the sidelines of the London summit, where he said he sees AI as "one of the biggest threats" to society. He called for a "third-party referee." Earlier this year, Musk was among the more than 33,000 people to sign an open letter calling on AI labs "to immediately pause for at least six months the training of AI systems more powerful than GPT-4." "Here we are, for the first time, really in human history, with something that's going to be far more intelligent than us," said Musk, who is looking at creating his own generative AI program. "So it's not clear to me we can actually control such a thing. But I think we can aspire to guide it in a direction that's beneficial to humanity. But I do think it's one of the existential risks that we face and it's potentially the most pressing one." Industry leaders like OpenAI CEO Sam Altman voiced similar fears in testimony before congressional committees earlier this year. "My worst fears are that we cause significant – we, the field, the technology, the industry – cause significant harm to the world. I think that could happen in a lot of different ways," he told lawmakers at a Senate Judiciary Committee hearing on May 16. That’s because, said Jessica Brandt, policy director for the AI and Emerging Technology Initiative at the Brookings Institution, while "AI has been used to do pretty remarkable things" – especially in the field of scientific research – it is limited by its creators. "It's not necessarily doing something that humans don't know how to do, but it's making discoveries that humans would be unlikely to be able to make in any meaningful timeframe, because they can just perform so many calculations so quickly," she told VOA on Zoom. And, she said, "AI is not objective, or all-knowing. 
There's been plenty of studies showing that AI is really only as good as the data that the model is trained on and that the data can have or reflect human bias. This is one of the major concerns." Or, as AI Now Executive Director Amba Kak said earlier this year in a magazine interview about AI systems: "The issue is not that they’re omnipotent. It is that they’re janky now. They’re being gamed. They’re being misused. They’re inaccurate. They’re spreading disinformation." Analysts say these government and tech officials don’t need a one-size-fits-all solution, but rather an alignment of values – and critically, human oversight and moral use. "It's OK to have multiple different approaches, and then also, where possible, coordinate to ensure that democratic values take root in the systems that govern technology globally," Brandt said. Industry leaders tend to agree, with Mira Murati, OpenAI’s chief technology officer, saying: "AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values." Analysts watching regulation say the U.S. is unlikely to come up with one coherent solution for the problems posed by AI. "The most likely outcome for the United States is a bottom-up patchwork quilt of executive branch actions," said Bill Whyman, a senior adviser in the Strategic Technologies Program at the Center for Strategic and International Studies. "Unlike Europe, the United States is not likely to pass a broad national AI law over the next few years. Successful legislation is likely focused on less controversial and targeted measures like funding AI research and AI child safety."
Drivers in Malawi are getting an opportunity to purchase electric vehicles through a local startup company. The handful of buyers so far say they no longer have to struggle daily to get fuel at pump stations. Lameck Masina reports from Blantyre.
Digital officials, tech company bosses and researchers are converging Wednesday at a former codebreaking spy base near London to discuss and better understand the extreme risks posed by cutting-edge artificial intelligence. The two-day summit focuses on so-called frontier AI — the latest and most powerful systems that take the technology right up to its limits, but could come with as-yet-unknown dangers. They're underpinned by foundation models, which power chatbots like OpenAI's ChatGPT and Google's Bard and are trained on vast pools of information scraped from the internet. Some 100 people from 28 countries are expected to attend Prime Minister Rishi Sunak's two-day AI Safety Summit, though the British government has refused to disclose the guest list. The event is a labor of love for Sunak, a tech-loving former banker who wants the U.K. to be a hub for computing innovation and has framed the summit as the start of a global conversation about the safe development of AI. But Vice President Kamala Harris is due to steal the focus on Wednesday with a separate speech in London setting out the U.S. administration's more hands-on approach. She's due to attend the summit on Thursday alongside government officials from more than two dozen countries including Canada, France, Germany, India, Japan, Saudi Arabia — and China, invited over the protests of some members of Sunak's governing Conservative Party. Tesla CEO Elon Musk is also scheduled to discuss AI with Sunak in a livestreamed conversation on Thursday night. The tech billionaire was among those who signed a statement earlier this year raising the alarm about the perils that AI poses to humanity. European Commission President Ursula von der Leyen, United Nations Secretary-General Antonio Guterres and executives from U.S. artificial intelligence companies such as Anthropic and influential computer scientists like Yoshua Bengio, one of the "godfathers" of AI, are also expected. 
The meeting is being held at Bletchley Park, a former top-secret base for World War II codebreakers that's seen as a birthplace of modern computing. One of Sunak's major goals is to get delegates to agree on a first-ever communique about the nature of AI risks. He said the technology brings new opportunities but warned of frontier AI's threat to humanity, because it could be used to create biological weapons or be exploited by terrorists to sow fear and destruction. Only governments, not companies, can keep people safe from AI's dangers, Sunak said last week. However, in the same speech, he also urged against rushing to regulate AI technology, saying it needs to be fully understood first. In contrast, Harris will stress the need to address the here and now, including "societal harms that are already happening such as bias, discrimination and the proliferation of misinformation." Harris plans to stress that the Biden administration is "committed to hold companies accountable, on behalf of the people, in a way that does not stifle innovation," including through legislation. "As history has shown in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over: The wellbeing of their customers; the security of our communities; and the stability of our democracies," she plans to say. She'll point to President Biden's executive order this week, setting out AI safeguards, as evidence the U.S. is leading by example in developing rules for artificial intelligence that work in the public interest. Among the measures she will announce is an AI Safety Institute, run through the Department of Commerce, to help set the rules for "safe and trusted AI." Harris also will encourage other countries to sign on to a U.S.-backed pledge to stick to "responsible and ethical" use of AI for military aims. A White House official gave details of Harris's speech, speaking on condition of anonymity to discuss her remarks in advance.
The world's first major summit on artificial intelligence (AI) safety opens in Britain Wednesday, with political and tech leaders set to discuss possible responses to the society-changing technology. British Prime Minister Rishi Sunak, U.S. Vice President Kamala Harris, EU chief Ursula von der Leyen and U.N. Secretary-General Antonio Guterres will all attend the two-day conference, which will focus on growing fears about the implications of so-called frontier AI. The release of the latest models has offered a glimpse into the potential of AI, but has also prompted concerns around issues ranging from job losses to cyberattacks and the control that humans actually have over the systems. Sunak, whose government initiated the gathering, said in a speech last week that his "ultimate goal" was "to work towards a more international approach to safety where we collaborate with partners to ensure AI systems are safe before they are released." "We will push hard to agree the first ever international statement about the nature of these risks," he added, drawing comparisons to the approach taken to climate change. But London has reportedly had to scale back its ambitions around ideas such as launching a new regulatory body amid a perceived lack of enthusiasm. Italian Prime Minister Giorgia Meloni is one of the few world leaders, and the only one from the G7, attending the conference. Elon Musk is due to appear, but it is not yet clear whether he will be physically at the summit in Bletchley Park, north of London, where top British codebreakers cracked Nazi Germany's "Enigma" code.
'Talking shop'
While the potential of AI raises many hopes, particularly for medicine, its development is seen as largely unchecked. In his speech, Sunak stressed the need for countries to develop "a shared understanding of the risks that we face." But lawyer and investigator Cori Crider, a campaigner for "fair" technology, warned that the summit could be "a bit of a talking shop. 
"If he were serious about safety, Rishi Sunak needed to roll deep and bring all of the U.K. majors and regulators in tow and he hasn't," she told a press conference in San Francisco. "Where is the labor regulator looking at whether jobs are being made unsafe or redundant? Where's the data protection regulator?" she asked. Having faced criticism for looking only at the risks of AI, the U.K. on Wednesday pledged $46 million to fund AI projects around the world, starting in Africa. Ahead of the meeting, the G7 powers agreed Monday on a non-binding "code of conduct" for companies developing the most advanced AI systems. The White House announced its own plan to set safety standards for the deployment of AI that will require companies to submit certain systems to government review. And in Rome, ministers from Italy, Germany and France called for an "innovation-friendly approach" to regulating AI in Europe, as they urged more investment to challenge the U.S. and China. China will be present, but it is unclear at what level. The news website Politico reported that London invited President Xi Jinping to signal its eagerness for a senior representative. Beijing's invitation has raised eyebrows amid heightened tensions with Western nations and accusations of technological espionage.