Paris — Even as human-caused climate change threatens the environment, nature continues to inspire our technological advancement. "The solutions that are provided by nature have evolved for billions of years and tested repeatedly every day since the beginning of time," said Evripidis Gkanias, a University of Edinburgh researcher.

Gkanias has a special interest in how nature can educate artificial intelligence. "Human creativity might be fascinating, but it cannot reach nature's robustness — and engineers know that," he told AFP. From compasses mimicking insect eyes to forest fire-fighting robots that behave like vines, here's a selection of this year's nature-based technology.

Insect compass

Some insects — such as ants and bees — navigate visually based on the intensity and polarisation of sunlight, using the sun's position as a reference point. Researchers replicated their eye structure to construct a compass capable of estimating the sun's location in the sky, even on cloudy days. Common compasses rely on Earth's weak magnetic field, which is easily disturbed by noise from electronics.

A prototype of the light-detecting compass is "already working great," said Gkanias, who led the study published in Communications Engineering. "With the appropriate funding, this could easily be transformed into a more compact and lightweight product" freely available, he added. And with a little further tweaking, the insect compass could work on any planet where a big celestial light source is visible.

Water-collecting webs

Fabric inspired by the silky threads of a spider web and capable of collecting drinking water from morning mist could soon play an important role in regions suffering water scarcity. The artificial threads draw from the feather-legged spider, whose intricate "spindle-knots" allow large water droplets to move and collect on its web.
Once the material can be mass produced, the water harvested could reach a "considerable scale for real application", Yongmei Zheng, a co-author of the study published in Advanced Functional Materials, told AFP.

Fire-fighting vines

Animals aren't the only source of inspiration from nature. Scientists have created an inflatable robot that "grows" in the direction of light or heat, in the same way vines creep up a wall or across a forest floor. The roughly two-meter-long tubular robot can steer itself using fluid-filled pouches rather than costly electronics. In time, these robots could find hot spots and deliver fire suppression agents, say researchers at the University of California, Santa Barbara.

"These robots are slow, but that is OK for fighting smoldering fires, such as peat fires, which can be a major source of carbon emissions," co-author Charles Xiao told AFP. But before the robots can climb the terrain, they need to be more heat-resistant and agile.

Kombucha circuits

Scientists at the Unconventional Computing Laboratory at the University of the West of England in Bristol have found a way to use slimy kombucha mats — produced by yeast and bacteria during the fermenting of the popular tea-based drink — to create "kombucha electronics." The scientists printed electrical circuits onto dried mats that were capable of illuminating small LED lights.

Dry kombucha mats share properties of textiles or even leather. But they are sustainable and biodegradable, and can even be immersed in water for days without being destroyed, said the authors. "Kombucha wearables could potentially incorporate sensors and electronics within the material itself, providing a seamless and unobtrusive integration of technology with the human body," such as for heart monitors or step-trackers, said lead author Andrew Adamatzky, the laboratory's director.
The mats are lighter, cheaper and more flexible than plastic, but the authors caution that durability and mass production remain significant obstacles.

Scaly robots

Pangolins resemble a cross between a pine cone and an anteater. The soft-bodied mammals, covered in reptilian scales, are known to curl up in a ball to protect themselves against predators. Now, a tiny robot might adapt that same design for potentially life-saving work, according to a study published in Nature Communications. It is intended to roll through our digestive tracts before unfurling and delivering medicine or stopping internal bleeding in hard-to-reach parts of the human body.

Lead author Ren Hao Soon of the Max Planck Institute for Intelligent Systems was watching a YouTube video when he "stumbled across the animal and saw it was a good fit." Soon needed a soft material that wouldn't cause harm inside the human body, with the advantages of a hard material that could, for example, conduct electricity. The pangolin's unique structure was perfect.

The tiny robots are still in their initial stages, but they could be made for as little as 10 euros each. "Looking to nature to solve these kinds of problems is natural," said Soon. "Every single design part of an animal serves a particular function. It's very elegant."
HERZLIYA, Israel — Nearly 7,000 miles away in Portland, Oregon, venture capitalist George Djuric said he was compelled to visit Israel during the country's war with Palestinian militant group Hamas and to pledge support for the high-tech sector.

Djuric, chief technology officer at yVentures, arrived in the United States as a 3-year-old refugee from Bosnia during the Bosnian war in the mid-1990s. This week he joined some 70 other U.S. tech executives and investors on a trip to Israel. "Coming here is a chance to stand in solidarity with Israel and also support the tech ecosystem, which is the world's second largest after Silicon Valley," he said. "As a technology fund, it makes sense for us to be here."

Although not Jewish, Djuric said he was drawn to Israel by the state's resiliency and as someone whose family's views were shaped by war. "I was horrified by what happened on October 7 and I was equally horrified the next day when I saw people demonstrating in support of what happened," he said, referring to the October 7 attack on Israel launched by Hamas.

Investors and analysts had predicted the conflict with the Palestinians would derail a fragile recovery in high-tech, which accounts for more than half of Israel's exports and nearly a fifth of its overall economic output. Funding had already dropped sharply amid a global slowdown and a divisive government judicial overhaul when the war took its toll on the economy. Growth, on pace for a 3.4% clip this year, has fallen to an expected 2%, with the outlook at least as grim. At least 15% of the tech workforce has been called up for military reserve duty.

Yet, even as the war rages, tech funding deals are still getting done, albeit at a slower pace. Startups have raised more than $6 billion in 2023, compared with $16 billion in 2022. On Tuesday, ScaleOps, a startup specializing in cloud resource management, announced a $21.5 million funding round.
Last week, cyber startup Zero Networks, which prevents attackers from spreading in corporate networks, raised $20 million.

'Long-term bullish on Israel'

Ron Miasnik of Bain Capital Ventures, who co-organized the delegation, said he had expected Israeli startups to go on drawing large sums, and that he believed the country's economy would ultimately bounce back. "It doesn't matter to us whether the economic rebound takes three months, six months, nine months or 12 months," he said. "We're long-term bullish on Israel."

Miasnik said the idea of the trip emerged from watching other solidarity groups, such as religious ones. "We felt the (U.S.) tech and the venture capital community, which is so heavily integrated within Israel, was missing," he said. Initially, the trip was supposed to include just 15 people, but, he said, hundreds of people showed interest. They included CEOs and senior executives of U.S.-based tech companies and VC funds, from Meetup.com, Apollo, TPG, Susquehanna Growth Equity, Mastercard, John Deere and Harvard University's endowment investment fund. In addition to meeting local investors and startups, they met Israeli leaders and families of hostages still held captive in Gaza, and toured border towns hit by the October 7 attack.

Bain has a number of investments in Israel, including Redis Labs, in which the fund has invested more than $100 million, and cybersecurity firm Armis, and Miasnik said he was seeking to add more Israeli cybersecurity startups to its portfolio. Similarly, Danny Schultz, managing director of New York-based Gotham Ventures, said he was looking to invest in 10 to 20 Israeli growth-stage startups, mainly in fintech, over the next three to five years. "At the point that Israeli CEOs need more capital, they also need relationships across the ocean in the U.S. and Europe to really help build their companies," he said.

Joy Marcus co-founded a new VC fund called The 98, which invests only in "women-led technology businesses that are disrupting industry."
"I am tortured by the war. ... So I am here to support Israel first and foremost," she said. "And I am also very interested in investing in some Israeli women."
NEW YORK — Artists under siege by artificial intelligence that studies their work and then replicates their styles have teamed with university researchers to stymie such copycat activity.

U.S. illustrator Paloma McClain went into defense mode after learning that several AI models had been trained using her art, with no credit or compensation sent her way. "It bothered me," McClain told AFP. "I believe truly meaningful technological advancement is done ethically and elevates all people instead of functioning at the expense of others," she said.

The artist turned to free software called Glaze, created by researchers at the University of Chicago. Glaze essentially outthinks AI models when it comes to how they train, tweaking pixels in ways that are indiscernible to human viewers but which make a digitized piece of art appear dramatically different to AI.

"We're basically providing technical tools to help protect human creators against invasive and abusive AI models," said Ben Zhao, a professor of computer science on the Glaze team. Created in just four months, Glaze spun off technology used to disrupt facial recognition systems. "We were working at super-fast speed because we knew the problem was serious," Zhao said of rushing to defend artists from software imitators. "A lot of people were in pain."

Generative AI giants have agreements to use data for training in some cases, but the majority of digital images, audio and text used to shape the way supersmart software thinks has been scraped from the internet without explicit consent. Since its release in March, Glaze has been downloaded more than 1.6 million times, according to Zhao.

Zhao's team is working on a Glaze enhancement called Nightshade that notches up defenses by confusing AI, say by getting it to interpret a dog as a cat. "I believe Nightshade will have a noticeable effect if enough artists use it and put enough poisoned images into the wild," McClain said, meaning they would be easily available online.
"According to Nightshade's research, it wouldn't take as many poisoned images as one might think," she said. Zhao's team has been approached by several companies that want to use Nightshade, according to the Chicago academic. "The goal is for people to be able to protect their content, whether it's individual artists or companies with a lot of intellectual property," Zhao said.

Viva Voce

A startup called Spawning has developed Kudurru, software that detects attempts to harvest large numbers of images from an online venue. An artist can then block access or send images that don't match what is being requested, tainting the pool of data being used to teach AI what is what, according to Spawning co-founder Jordan Meyer. More than 1,000 websites have been integrated into the Kudurru network.

Spawning has also launched haveibeentrained.com, a website that features an online tool for finding out whether digitized works have been fed into an AI model and allows artists to opt out of such use in the future.

As defenses ramp up for images, researchers at Washington University in St. Louis, Missouri, have developed AntiFake software to thwart AI copying of voices. AntiFake enriches digital recordings of people speaking, adding noises inaudible to people but which make it "impossible to synthesize a human voice," said Zhiyuan Yu, the Ph.D. student behind the project.

The program aims to go beyond just stopping unauthorized training of AI to preventing the creation of "deepfakes" — bogus soundtracks or videos of celebrities, politicians, relatives or others showing them doing or saying something they didn't. A popular podcast recently reached out to the AntiFake team for help stopping its productions from being hijacked, according to Zhiyuan Yu. The freely available software has so far been used for recordings of people speaking, but could also be applied to songs, the researcher said.
"The best solution would be a world in which all data used for AI is subject to consent and payment," Meyer contended. "We hope to push developers in this direction."
MOSCOW — The head of a company that makes navigation systems for Russia's space program was arrested in Moscow and charged with major fraud, state media reported Friday. TASS news agency quoted an unidentified law enforcement official as saying that Yevgeny Fomichev had been interrogated and charged with large-scale fraud, which carries a prison term of up to 10 years and a fine of 1 million rubles ($10,972). TASS said Moscow's Basmanny District Court, which often handles high-profile cases, ordered Fomichev to be held in pretrial detention until Feb. 21 at the request of Russia's Investigative Committee, which deals with serious crimes. Fomichev is head of NPP Geophysics-Cosmos, a company whose website says it manufactures "optical electronic orientation and navigation devices for spacecraft." It says that almost all Russian spacecraft use its equipment. The website includes a nine-page anti-corruption policy that says management has a key role in creating a culture of zero-tolerance toward corruption. Russia's space program suffered a huge setback in August when its Luna-25 spacecraft smashed into the surface of the moon while attempting to land there. An investigation blamed a malfunction in an onboard control unit for the failure of Russia's first moon mission in 47 years.
WASHINGTON — The U.S. Department of Commerce said Thursday that it would launch a survey of the U.S. semiconductor supply chain and national defense industrial base to address national security concerns posed by Chinese-sourced chips.

The survey aims to identify how U.S. companies are sourcing so-called legacy chips — current-generation and mature-node semiconductors — as the department moves to award nearly $40 billion in subsidies for semiconductor chip manufacturing. The department said the survey, which will begin in January, aims to "reduce national security risks posed by" China and will focus on the use and sourcing of Chinese-manufactured legacy chips in the supply chains of critical U.S. industries.

A report released by the department on Thursday said China had provided the Chinese semiconductor industry with an estimated $150 billion in subsidies over the last decade, creating "an unlevel global playing field for U.S. and other foreign competitors." Commerce Secretary Gina Raimondo said, "Over the last few years, we've seen potential signs of concerning practices from [China] to expand their firms' legacy chip production and make it harder for U.S. companies to compete."

China's embassy in Washington said Thursday that the United States "has been stretching the concept of national security, abusing export control measures, engaging in discriminatory and unfair treatment against enterprises of other countries, and politicizing and weaponizing economic and sci-tech issues."

Raimondo said last week that she expected her department to make about a dozen semiconductor chip funding awards within the next year, including multibillion-dollar announcements that could drastically reshape U.S. chip production. Her department made the first award from the program on December 11. The Commerce Department said the survey would also help promote a level playing field for legacy chip production. "Addressing non-market actions by foreign governments that threaten the U.S.
legacy chip supply chain is a matter of national security," Raimondo added. U.S.-headquartered companies account for about half of the global semiconductor revenue but face intense competition supported by foreign subsidies, the department said. Its report said the cost of manufacturing semiconductors in the United States may be "30-45% higher than the rest of the world," and it called for long-term support for domestic fabrication construction. It added that the U.S. should enact "permanent provisions that incentivize steady construction and modernization of semiconductor fabrication facilities, such as the investment tax credit scheduled to end in 2027."
From ChatGPT to the impacts of machine learning on the music and film industry, academia and politics, generative artificial intelligence dominated technology news in 2023. Deana Mitchell takes a look.
CAPE CANAVERAL, Fla. — An international astronaut will join U.S. astronauts on the moon by decade's end under an agreement announced Wednesday by NASA and the White House. The news came as Vice President Kamala Harris convened a meeting in Washington of the National Space Council, the third such gathering under the Biden administration. There was no mention of who the international moonwalker might be or even what country would be represented. A NASA spokeswoman later said that crews would be assigned closer to the lunar-landing missions, and that no commitments had yet been made to another country. NASA has included international astronauts on trips to space for decades. Canadian Jeremy Hansen will fly around the moon a year or so from now with three U.S. astronauts. Another crew would actually land; it would be the first lunar touchdown by astronauts in more than a half-century. That's not likely to occur before 2027, according to the U.S. Government Accountability Office. All 12 moonwalkers during NASA's Apollo program of the 1960s and 1970s were U.S. citizens. The space agency's new moon exploration program is named Artemis after Apollo's mythological twin sister. Including international partners "is not only sincerely appreciated, but it is urgently needed in the world today," Hansen told the council. NASA has long stressed the need for global cooperation in space, establishing the Artemis Accords along with the U.S. State Department in 2020 to promote responsible behavior not just at the moon but everywhere in space. Representatives from all 33 countries that have signed the accords so far were expected at the space council's meeting in Washington. "We know from experience that collaboration on space delivers," said Secretary of State Antony Blinken, citing the Webb Space Telescope, a U.S., European and Canadian effort. Notably missing from the Artemis Accords: Russia and China, the only countries besides the U.S. to launch their own citizens into orbit. 
Russia is a partner with NASA in the International Space Station, along with Europe, Japan and Canada. Even earlier in the 1990s, the Russian and U.S. space agencies teamed up during the shuttle program to launch each other's astronauts to Russia's former orbiting Mir station. During Wednesday's meeting, Harris also announced new policies to ensure the safe use of space as more and more private companies and countries aim skyward. Among the issues that the U.S. is looking to resolve: the climate crisis and the growing amount of space junk around Earth. A 2021 anti-satellite missile test by Russia added more than 1,500 pieces of potentially dangerous orbiting debris, and Blinken joined others at the meeting in calling for all nations to end such destructive testing.
TOKYO — Toyota Motor's Daihatsu unit will halt shipments of all of its vehicles, Japan's biggest automaker said on Wednesday, after an investigation into a safety scandal found issues with 64 models, including almost two dozen sold under Toyota's brand.

An independent panel has been investigating Daihatsu since it said in April that it had rigged side-collision safety tests carried out for 88,000 small cars, most of them sold as Toyotas. But the latest revelations suggest the scope of the scandal is far greater than previously thought and could tarnish the automakers' reputation for quality and safety. Daihatsu is Toyota's small-car unit and produces a number of the so-called "kei" small cars and trucks that are popular in Japan. The latest issues also affected some Mazda and Subaru models sold in the domestic market, as well as Toyota and Daihatsu models overseas, the panel found.

Toyota said "fundamental reform" was needed to revitalize Daihatsu, as well as a review of certification operations. "This will be an extremely significant task that cannot be accomplished overnight," Toyota said in a statement. "It will require not only a review of management and business operations but also a review of the organization and structure." Toyota shares were flat on Wednesday afternoon, lagging a 1.6% rise in the broader market.

Daihatsu was found to have cheated on safety tests of almost all models it currently has in production, as well as some cars it made in the past, the Asahi newspaper previously reported. The issue emerged after Daihatsu said in April it had discovered the wrongly conducted tests through a whistleblower report. It reported the issue to regulatory agencies and halted shipments of affected models. The following month, it said it had stopped sales of the Toyota Raize hybrid electric vehicle and its own Rocky model after also finding problems with testing for those models.
Daihatsu produced 1.1 million vehicles over the first 10 months of the year, nearly 40% of those at overseas sites, according to Toyota data. It sold some 660,000 vehicles worldwide over that period and accounted for 7% of Toyota's sales.

Toyota said on Wednesday that affected models included those for the Southeast Asian markets of Thailand, Indonesia, Malaysia, Cambodia and Vietnam and the Central and South American countries of Mexico, Ecuador, Peru, Chile, Bolivia and Uruguay.

The Daihatsu case is the latest in a string of safety issues to hit the Toyota group over the years. An engine data scandal at Toyota's truck- and bus-making unit, Hino Motors, in 2022 led to resignations and temporary pay cuts for some managers. In that case, Hino admitted to falsifying data on some engines dating back to 2003, at least a decade earlier than it originally indicated. In 2010, Toyota Chairman Akio Toyoda, then chief executive, was forced to testify before the U.S. Congress during a safety crisis involving faulty accelerators.
WASHINGTON — Blue Origin launched its first rocket in more than a year on Tuesday, reviving the U.S. company's fortunes with a successful return to space following an uncrewed crash in 2022. Though mission NS-24 carried a payload of science experiments, not people, it paves the way for Jeff Bezos' aerospace enterprise to resume taking wealthy thrill-seekers to the final frontier. The New Shepard suborbital rocket blasted off from the pad at Launch Site One, near Van Horn, Texas, at 10:42 a.m. After separating from the booster, the gumdrop-shaped capsule attained a peak altitude of 107 kilometers above sea level, well above the internationally recognized boundary of space known as the Karman line, which is 100 kilometers high. The booster then successfully landed vertically on the launchpad, against the majestic backdrop of the Sierra Diablo mountains, followed a few minutes later by the capsule floating to the desert floor on three giant parachutes. All in all, the mission lasted 10 minutes and 13 seconds. "Demand for New Shepard flights continues to grow, and we're looking forward to increasing our flight cadence in 2024," said Phil Joyce, the company's senior vice president. The science experiments onboard included one to demonstrate the operation of hydrogen fuel cell technology in microgravity, and another showing how water and gas move in a weightless environment. Future applications could include monitoring water quality for astronauts in space. On Sept. 12, 2022, a Blue Origin rocket became engulfed in flames shortly after launch. The capsule, fixed to the top of the rocket, successfully initiated an emergency separation sequence and floated safely to the ground on parachutes. The accident prompted a year-long probe by the Federal Aviation Administration, which found it was caused by the failure of an engine nozzle that experienced higher-than-expected operating temperatures. 
The regulator issued a set of corrective actions for Blue Origin to undertake before it could resume flying, including the redesign of certain engine parts. It confirmed Sunday that it had approved Blue Origin's application to fly again.

In all, Blue Origin has carried out six crewed flights — some passengers were paying customers and others were guests — since July 2021, when Bezos himself took part in the first. While Blue Origin has been grounded, rival Virgin Galactic — the company founded by British billionaire Richard Branson — has pressed on, with five commercial flights this year.

The two companies compete in the emerging space tourism sector, operating in suborbital space. While Blue Origin launches a small rocket vertically, Virgin Galactic uses a large carrier plane to gain altitude and then drops off a smaller, rocket-powered spaceplane that completes the journey to space. In both cases, passengers enjoy a few minutes of weightlessness and can view the curvature of the Earth through large windows. Virgin Galactic tickets have sold for between $200,000 and $450,000; Blue Origin does not publicly disclose its ticket prices.

Blue Origin can boast that nearly all of its rocket platform is reused, including the booster, capsule, engine, landing gear and parachutes. Its engine, meanwhile, is fueled by liquid oxygen and hydrogen, meaning the only byproduct during flight is water vapor, with no carbon emissions.

Blue Origin is also developing a heavy rocket for commercial purposes called New Glenn, with the maiden flight planned for next year. This rocket, which measures 98 meters high, is designed to carry payloads of as much as 45 metric tons into low Earth orbit.
Researchers in Australia have developed a method to make hydrogen from seawater without a costly desalination process. This could mark a breakthrough in the production of clean hydrogen from a plentiful, eco-friendly source. VOA’s Julie Taboh has more.
LONDON — European Union authorities are looking into whether Elon Musk's online platform X breached tough new social media regulations, in the first such investigation since the rules designed to make online content less toxic took effect.

"Today we open formal infringement proceedings against @X" under the Digital Services Act, European Commissioner Thierry Breton said in a post on the platform Monday. "The Commission will now investigate X's systems and policies related to certain suspected infringements," spokesman Johannes Bahrke told a press briefing in Brussels. "It does not prejudge the outcome of the investigation."

The investigation will look into whether X, formerly known as Twitter, failed to do enough to curb the spread of illegal content, and whether its measures to combat "information manipulation," especially through its Community Notes feature, were effective. The EU will also examine whether X was transparent enough with researchers and will look into suspicions that its user interface, including its blue check subscription service, has a "deceptive design."

"X remains committed to complying with the Digital Services Act, and is cooperating with the regulatory process," the company said in a prepared statement. "It is important that this process remains free of political influence and follows the law. X is focused on creating a safe and inclusive environment for all users on our platform, while protecting freedom of expression, and we will continue to work tirelessly towards this goal."

A raft of big tech companies has faced stricter scrutiny since the EU's Digital Services Act took effect earlier this year, threatening penalties of up to 6% of their global revenue — which could amount to billions — or even a ban from the EU.
The DSA is a set of far-reaching rules designed to keep users safe online and stop the spread of harmful content that's either illegal, such as child sexual abuse or terrorism content, or that violates a platform's terms of service, such as promotion of genocide or anorexia.

The EU has already called out X as the worst place online for fake news, and officials have exhorted owner Musk, who bought the platform a year ago, to do more to clean it up. The European Commission quizzed X over its handling of hate speech, misinformation and violent terrorist content related to the Israel-Hamas war after the conflict erupted.
LAHORE, Pakistan — Artificial rain was used for the first time in Pakistan on Saturday in a bid to combat hazardous levels of smog in the megacity of Lahore, the provincial government said.

In the first experiment of its kind in the South Asian country, planes equipped with cloud-seeding equipment flew over 10 areas of the city, often ranked one of the worst places globally for air pollution. The "gift" was provided by the United Arab Emirates, said Mohsin Naqvi, caretaker chief minister of Punjab. "Teams from the UAE, along with two planes, arrived here about 10 to 12 days ago. They used 48 flares to create the rain," he told the media. He said the team would know by Saturday night what effect the "artificial rain" had.

The UAE has increasingly used cloud seeding, sometimes referred to as artificial rain or "blueskying," to create rain in the arid expanse of the country. The weather modification involves releasing common salt — or a mixture of different salts — into clouds, where the crystals encourage condensation to form as rain. The technique has been deployed in dozens of countries, including the United States, China and India. Even very modest rain is effective in bringing down pollution, experts say.

Air pollution has worsened in Pakistan in recent years, as a mixture of low-grade diesel fumes, smoke from seasonal crop burn-off and colder winter temperatures coalesces into stagnant clouds of smog. Lahore suffers the most from the toxic smog, which chokes the lungs of the city's more than 11 million residents during the winter season. Levels of PM2.5 pollutants — cancer-causing microparticles that enter the bloodstream through the lungs — were measured as hazardous in Lahore on Saturday, at more than 66 times the World Health Organization's danger limits.

Breathing the poisonous air has catastrophic health consequences. Prolonged exposure can trigger strokes, heart disease, lung cancer and respiratory diseases, according to the WHO.
Successive governments have used various methods to reduce air pollution in Lahore, including spraying water on the roads, and weekend shutdowns of schools, factories and markets, with little or no success. When asked about a long-term strategy to combat smog, the chief minister said the government needs studies to formulate a plan.
MADRID — More than 80 Spanish media organizations are filing a $600 million lawsuit against Meta over what they say is unfair competition, in a case that could be repeated across the European Union.

The lawsuit is the latest front in a battle by legacy media against the dominance of tech giants at a time when the traditional media industry is in economic decline. Losing revenue to Silicon Valley companies means less money to invest in investigative journalism and fewer resources to fight back against disinformation. The case is the latest example of media globally seeking compensation from internet and social media platforms for use of their content.

The Association of Media of Information (AMI), a consortium of Spanish media companies, claimed in the lawsuit that Meta violated EU data protection rules between 2018 and 2023, Reuters reported. The newspapers argue that Meta’s “massive” and “systematic” use of its Facebook, Instagram and WhatsApp platforms gives it an unfair advantage in designing and offering personalized advertisements, which they say constitutes unfair competition.

Irene Lanzaco, director general of AMI, told VOA it estimated the actions of Meta had cost Spanish newspapers and magazines $539.2 million in lost income between 2018 and 2023. “This loss of income has meant it is more difficult for the media to practice journalism, to pay its journalists, to mount investigations and to hold politicians to account for corruption,” she said. “It means that society becomes more polarized, and people become less involved with their communities if they do not know what is going on.”

Analysts say this is an “innovative” strategy by legacy media against tech giants, one designed to engage people outside the news business. Until now, traditional media cases against Silicon Valley have centered on the theft of intellectual property from the news business, but the Spanish suit makes a claim related to alleged theft of personal data.
“Previously, all the cases that legacy media has brought have been about the piracy of intellectual property — ‘We report the news, and these people are putting it on their websites without paying for it,’” Kathy Kiely, the Lee Hills chair in Free Press Studies at the Missouri School of Journalism, told VOA. “But what this case is about is that these social media platforms have access to a lot of information about the audience to gain unfair advantage in advertising,” she said. The lawsuit was filed with a commercial court in Madrid, reported Reuters, which saw the court papers. Matt Pollard, a spokesman for Meta Platforms, told VOA, “We have not received the legal papers on this case, so we cannot comment. All we know about it is what we have read in the media.” The complainants include Prisa, which publishes Spain’s left-wing daily El País; Vocento, owner of ABC, a right-wing daily; and the Barcelona-based conservative daily La Vanguardia. They claim that Meta used personal data obtained without the express consent of clients in violation of the EU General Data Protection Regulation in force since May 2018, which demands that any website request authorization to keep and use personal data. “Of course in any other EU country, the same legal procedure could be initiated,” as it concerns an alleged violation of European regulations, Nicolas González Cuellar, a lawyer representing AMI, told Reuters. Kiely said the Spanish case may engage the broader public and policymakers, in Europe and beyond. “[This legal case] introduces a new strategy. It is not just about the survival of the local news organization. It is about privacy,” she said. “This engages people outside the news business in a way that piracy of the intellectual property does not.” The lawsuit is the latest attempt by media organizations that have struggled to make tech giants pay fair fees for using and sharing their content. 
The legal battle comes as the Reuters Institute’s 2023 Digital News Report found that tech platforms like Meta and Google had become a “running sore” for news publishers over the past decade. “Google and Facebook [now Meta] at their height accounted for just under half of online traffic to news sites,” the report said. “Although the so-called ‘duopoly’ remains hugely consequential, our report shows how this platform position is becoming a little less concentrated in many markets, with more providers competing." It added, “Digital audio and video are bringing new platforms into play, while some consumers have adopted less toxic and more private messaging networks for communications.” Spanish media scored a victory against Alphabet’s Google News service, which the government shut down in 2014 before its reopening in 2022 under new legislation allowing media outlets to negotiate fees directly with the tech giant. Last month, Google and the Canadian government reached an agreement in their dispute over the Online News Act, which would see Google continue to use Canadian news online in return for the company making annual payments to news companies of about $100 million. Radio Canada and CBC News reported last month that the Canadian federal government estimated earlier this year that Google’s compensation should amount to about $172 million, while Google estimated this value at $100 million. Canadian Prime Minister Justin Trudeau said the agreement was “very good news.” “After months of holding strong, of demonstrating our commitment to local journalism, to strong independent journalists getting paid for their work … Google has agreed to properly support journalists, including local journalism,” he said. Google said it would not have a mandatory negotiation model imposed on it for talks with the media in Canada. Instead, it preferred to deal with a single media group that would represent all media, allowing the group to limit its arbitration risk. 
Google had threatened to block Canadian news content on its platforms because of the legislation but did not. In contrast, Meta ended its talks with the Canadian government last summer and stopped distributing Canadian news on Facebook and Instagram. The Reuters Institute’s 2023 report said that 29% of Canadians used Facebook for news. Around 11% used Facebook Messenger, and 10% used Instagram for the same purpose.
Immigrants from Belarus, Ukraine and other Eastern European countries are actively exploring the American IT startup market. One immigrant-run venture capital firm is helping them find investments. Evgeny Maslov has the story, narrated by Anna Rice. Camera: Michael Eckels.
Detroit, Mich — Tesla is recalling more than 2 million vehicles across its model lineup to fix a defective system that’s supposed to ensure drivers are paying attention when they use Autopilot. Documents posted Wednesday by U.S. safety regulators say the company will send out a software update to fix the problems. The recall comes after a two-year investigation by the National Highway Traffic Safety Administration into a series of crashes that happened while the Autopilot partially automated driving system was in use. Some were deadly. The agency says its investigation found Autopilot's method of ensuring that drivers are paying attention can be inadequate and can lead to foreseeable misuse of the system. The recall covers nearly all of the vehicles Tesla sold in the U.S. and includes models Y, S, 3 and X produced between Oct. 5, 2012, and Dec. 7 of this year. The software update includes additional controls and alerts “to further encourage the driver to adhere to their continuous driving responsibility,” the documents said. The update was to be sent to certain affected vehicles on Tuesday, with the rest getting it at a later date, the documents said. Autopilot includes features called Autosteer and Traffic Aware Cruise Control, with Autosteer intended for use on limited access freeways when it’s not operating with a more sophisticated feature called Autosteer on City Streets. The software update apparently will limit where Autosteer can be used. “If the driver attempts to engage Autosteer when conditions are not met for engagement, the feature will alert the driver it is unavailable through visual and audible alerts, and Autosteer will not engage,” the recall documents said. 
Depending on a Tesla’s hardware, the added controls include “increasing prominence” of visual alerts, simplifying how Autosteer is turned on and off, additional checks on whether Autosteer is being used outside of controlled access roads and when approaching traffic control devices, “and eventual suspension from Autosteer use if the driver repeatedly fails to demonstrate continuous and sustained driving responsibility,” the documents say. Recall documents say that agency investigators met with Tesla starting in October to explain “tentative conclusions” about fixing the monitoring system. Tesla, it said, did not agree with the agency's analysis but agreed to the recall on Dec. 5 in an effort to resolve the investigation. Auto safety advocates for years have been calling for stronger regulation of the driver monitoring system, which mainly detects whether a driver's hands are on the steering wheel. They have called for cameras to make sure a driver is paying attention, which are used by many other automakers with similar systems. Autopilot can steer, accelerate and brake automatically in its lane, but it is a driver-assist system and cannot drive itself despite its name. Independent tests have found that the monitoring system is easy to fool, so much so that drivers have been caught while driving drunk or even sitting in the back seat. In its defect report filed with the safety agency, Tesla said Autopilot's controls “may not be sufficient to prevent driver misuse.” A message was left early Wednesday seeking further comment from the Austin, Texas, company. Tesla says on its website that Autopilot and a more sophisticated Full Self Driving system cannot drive autonomously and are meant to help drivers who have to be ready to intervene at all times. Full Self Driving is being tested by Tesla owners on public roads. In a statement posted Monday on X, formerly Twitter, Tesla said safety is stronger when Autopilot is engaged. 
NHTSA has dispatched investigators to 35 Tesla crashes since 2016 in which the agency suspects the vehicles were running on an automated system. At least 17 people have been killed. The investigations are part of a larger probe by the NHTSA into multiple instances of Teslas using Autopilot crashing into parked emergency vehicles that are tending to other crashes. NHTSA has become more aggressive in pursuing safety problems with Teslas in the past year, announcing multiple recalls and investigations, including a recall of Full Self Driving software. In May, Transportation Secretary Pete Buttigieg, whose department includes NHTSA, said Tesla shouldn’t be calling the system Autopilot because it can’t drive itself. In its statement Wednesday, NHTSA said the Tesla investigation remains open “as we monitor the efficacy of Tesla’s remedies and continue to work with the automaker to ensure the highest level of safety.”
WASHINGTON — U.S. Commerce Secretary Gina Raimondo vowed Monday to take the “strongest action possible” in response to a semiconductor chip-making breakthrough in China that a House Foreign Affairs Committee report said “almost certainly required the use of U.S. origin technology and should be an export control violation.” In an interview with Bloomberg News, Raimondo called the advanced processor in Huawei Technologies’ Mate 60 Pro smartphone released in August “deeply concerning” and said the Commerce Department investigates such things vigorously. The United States has banned chip sales to Huawei, which reportedly used 7-nanometer chips from Chinese chip giant Semiconductor Manufacturing International Corp., or SMIC, in the phone, a technology China had not been known to be capable of producing. Raimondo said the U.S. was also looking into the specifics of three new artificial intelligence accelerator chips that California-based Nvidia Corp. is developing for China. “We look at every spec of every new chip, obviously, to make sure it doesn’t violate the export controls,” she said. Nvidia came under U.S. scrutiny for designing China-specific chips that fell just under new Commerce Department requirements announced in October for tighter export controls on advanced AI chips for civilian use that could have military applications. China’s Foreign Ministry responded to Raimondo’s comments Tuesday, saying the U.S. was “undermining the rights of Chinese companies” and contradicting the principles of a market economy. 'Almost certainly required US origin technology' The U.S. House Foreign Affairs Committee in a December 7 report criticized the Commerce Department’s Bureau of Industry and Security, or BIS, the body responsible for regulating dual-use export controls. 
The report said Chinese chip giant “SMIC is producing 7 nanometer chips — advanced technology for semiconductors that had been only capable of development by TSMC, Intel and Samsung.” “Despite this breakthrough by SMIC, which almost certainly required the use of U.S. origin technology and should be an export control violation, BIS has not acted,” the 66-page report said. “We can no longer afford to avoid the truth: the unimpeded transfer of U.S. technology to China is one of the single-largest contributors to China’s emergence as one of the world’s premier scientific and technological powers.” Excessive approvals alleged Committee Chairman Michael McCaul said BIS had an excessive rate of approval for controlled technology transfers and lacked checks on end-use, raising serious questions about the current U.S. export control mechanism. “U.S. export control officials should adopt a presumption that all [Chinese] entities will divert technology to military or surveillance uses,” said McCaul’s report, but “currently, the overwhelming approval rates for licenses or exceptions for dual-use technology transfers to China indicate that licensing officials at BIS are likely presuming that items will be used only for their intended purposes.” According to BIS’s website, a key in determining whether an export license is needed from the Department of Commerce is knowing whether the item one intends to export has a specific Export Control Classification Number, or ECCN. All ECCNs are listed in the Commerce Control List, or CCL, which is divided into ten broad categories. The committee’s report said that “in 2020, nearly 98% of CCL items export to China went without a license,” and “in 2021, BIS approved nearly 90% of applications for the export of CCL items to China.” The report said that between 2016 and 2021, “the United States government’s two export control officers in China conducted on average only 55 end-user checks per year of the roughly 4,000 active licenses in China. 
Put another way, BIS likely verified less than 0.01% of all licenses, which represent less than 1% of all trade with China.” China skilled in avoiding controls But China is also skilled at avoiding U.S. export controls, analysts said. William Yu, an economist at UCLA Anderson Forecast, told VOA Mandarin in a phone interview that China can get banned chips through a third country. “For example, some countries in the Middle East set up a company in that country to buy these high-level chips from the United States. From there, one is transferred back to China,” Yu said. Thomas Duesterberg, a senior fellow at the Hudson Institute, told VOA Mandarin in a phone interview that the Commerce Department’s BIS has a hard job. “If you forbid technology from going to one company in China, the Chinese are experts at creating another company or just moving the company to a new address and disguising its name to try to evade the controls. China is a big country and there's a lot of technology that is at stake here,” he said. “It's true on the one hand that BIS has been successful in some areas, such as advanced semiconductors in conjunction with denial of Chinese ability to buy American technology companies,” said Duesterberg. “But it's also true as the [House Foreign Affairs Committee] report emphasizes that a lot of activities that policymakers would like to restrict is not being done.” Insufficient resources or political will? Despite its huge responsibility to ensure that the United States stays ahead in the escalating U.S.-China science and technology competition, the Commerce Department’s BIS is small, employing just over 300 people. At the annual Reagan National Defense Forum on December 2, Secretary Raimondo lamented that BIS “has the same budget today as it did a decade ago” despite the increasing challenges and workload, reported Breaking Defense, a New York-based online publication on global defense and politics. U.S. 
Representatives Elise Stefanik, Mike Gallagher, who is chairman of the House Select Committee on the Chinese Communist Party, and McCaul released a joint response to Raimondo's call for additional funds for the BIS, saying resources alone would not resolve export control shortcomings. Raimondo also warned chip companies that the U.S. would further tighten controls to prevent cutting edge AI technology from going to Beijing. “The threat from China is large and growing,” she said in an interview to CNBC at the December 2 forum. “China wants access to our most sophisticated semiconductors, and we can’t afford to give them that access. We’re not just going to deny a single company in China, we’re going to deny the whole country access to our cutting-edge semiconductors.”
European Union officials worked into the late hours last week hammering out an agreement on world-leading rules meant to govern the use of artificial intelligence in the 27-nation bloc. The Artificial Intelligence Act is the latest set of regulations designed to govern technology in Europe — one that may be destined to have global impact. Here's a closer look at the AI rules: What is the AI act and how does it work? The AI Act takes a "risk-based approach" to products or services that use artificial intelligence and focuses on regulating uses of AI rather than the technology itself. The legislation is designed to protect democracy, the rule of law and fundamental rights like freedom of speech, while still encouraging investment and innovation. The riskier an AI application is, the stiffer the rules. Those that pose limited risk, such as content recommendation systems or spam filters, would have to follow only light rules such as revealing that they are powered by AI. High-risk systems, such as medical devices, face tougher requirements like using high-quality data and providing clear information to users. Some AI uses are banned because they're deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces. People in public can't have their faces scanned by police using AI-powered remote "biometric identification" systems, except in cases of serious crimes like kidnapping or terrorism. The AI Act won't take effect until two years after final approval from European lawmakers, expected in a rubber-stamp vote in early 2024. Violations could draw fines of up to 35 million euros ($38 million) or 7% of a company's global revenue. How does the AI act affect the rest of the world? 
The AI Act will apply to the EU's nearly 450 million residents, but experts say its impact could be felt far beyond because of Brussels' leading role in drawing up rules that act as a global standard. The EU has played the role before with previous tech directives, most notably mandating a common charging plug that forced Apple to abandon its in-house Lightning cable. While many other countries are figuring out whether and how they can rein in AI, the EU's comprehensive regulations are poised to serve as a blueprint. "The AI Act is the world's first comprehensive, horizontal and binding AI regulation that will not only be a game-changer in Europe but will likely significantly add to the global momentum to regulate AI across jurisdictions," said Anu Bradford, a Columbia Law School professor who's an expert on EU law and digital regulation. "It puts the EU in a unique position to lead the way and show to the world that AI can be governed, and its development can be subjected to democratic oversight," she said. Even what the law doesn't do could have global repercussions, rights groups said. By not pursuing a full ban on live facial recognition, Brussels has "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally," Amnesty International said. The partial ban is "a hugely missed opportunity to stop and prevent colossal damage to human rights, civil space and rule of law that are already under threat through the EU." Amnesty also decried lawmakers' failure to ban the export of AI technologies that can harm human rights — including for use in social scoring, something China does to reward obedience to the state through surveillance. What are other countries doing about AI regulation? The world's two major AI powers, the U.S. and China, also have started the ball rolling on their own rules. U.S. 
President Joe Biden signed a sweeping executive order on AI in October, which is expected to be bolstered by legislation and global agreements. It requires leading AI developers to share safety test results and other information with the government. Agencies will create standards to ensure AI tools are safe before public release and issue guidance to label AI-generated content. Biden's order builds on voluntary commitments made earlier by technology companies including Amazon, Google, Meta and Microsoft to make sure their products are safe before they're released. China, meanwhile, has released "interim measures" for managing generative AI, which apply to text, pictures, audio, video and other content generated for people inside China. President Xi Jinping has also proposed a Global AI Governance Initiative, calling for an open and fair environment for AI development. How will the AI act affect ChatGPT? The spectacular rise of OpenAI's ChatGPT showed that the technology was making dramatic advances and forced European policymakers to update their proposal. The AI Act includes provisions for chatbots and other so-called general purpose AI systems that can do many different tasks, from composing poetry to creating video and writing computer code. Officials took a two-tiered approach, with most general-purpose systems facing basic transparency requirements like disclosing details about their data governance and, in a nod to the EU's environmental sustainability efforts, how much energy they used to train the models on vast troves of written works and images scraped off the internet. They also need to comply with EU copyright law and summarize the content they used for training. Stricter rules are in store for the most advanced AI systems with the most computing power, which pose "systemic risks" that officials worry could spread to the services other software developers build on top of them.
Lawmakers and parents are blaming social media platforms for contributing to mental health problems in young people. A group of U.S. states is suing the owner of Instagram and Facebook for promoting their platforms to children despite knowing some of the psychological harms and safety risks they pose. From New York, VOA's Tina Trinh reports that a cause-and-effect relationship between social media and mental health may not be so clear.
U.S. chipmaker Nvidia's chief executive said on Monday the company will expand its partnership with Vietnam's top tech firms and support the country in training talent for developing artificial intelligence and digital infrastructure. Nvidia, which has already invested $250 million in Vietnam, has so far partnered with leading tech companies to deploy AI in the cloud, automotive and healthcare industries, a document published by the White House in September showed when Washington upgraded diplomatic relations with Vietnam. "Vietnam is already our partner as we have millions of clients here," Jensen Huang, Nvidia's CEO, said at an event in Hanoi during his first visit to the country. "Vietnam and Nvidia will deepen our relations, with Viettel, FPT, Vingroup, VNG being the partners Nvidia looks to expand partnership with," Huang said, adding Nvidia would support Vietnam's artificial intelligence training and infrastructure. Reuters reported last week Nvidia was set to discuss cooperation deals on semiconductors with Vietnamese tech companies and authorities in a meeting on Monday. Huang's visit comes at a time when Vietnam is trying to expand into chip designing and possibly chip-making as trade tensions between the United States and China create opportunities for Vietnam in the industry. At Monday's event, Vietnam's investment minister Nguyen Chi Dzung said the country had been preparing mechanisms and incentives to attract investment projects in the semiconductor and artificial intelligence industries. Dzung also asked Nvidia to consider setting up a research and development facility in the country, following Huang's proposal to set up a base in Vietnam after his meeting with Vietnamese Prime Minister Pham Minh Chinh on Sunday.