Technology


Spanish Newspapers Fight Meta in Unfair Competition Case

Wed, 12/13/2023 - 20:53
Madrid — More than 80 Spanish media organizations are filing a $600 million lawsuit against Meta over what they say is unfair competition, in a case that could be repeated across the European Union.

The lawsuit is the latest front in a battle by legacy media against the dominance of tech giants at a time when the traditional media industry is in economic decline. Losing revenue to Silicon Valley companies means less money to invest in investigative journalism and fewer resources to fight back against disinformation. The case is the latest example of media globally seeking compensation from internet and social media platforms for use of their content.

The Association of Media of Information (AMI), a consortium of Spanish media companies, claimed in the lawsuit that Meta violated EU data protection rules between 2018 and 2023, Reuters reported. The newspapers argue that Meta’s “massive” and “systematic” use of the personal data of users of its Facebook, Instagram and WhatsApp platforms gives it an unfair advantage in designing and offering personalized advertisements, which they say constitutes unfair competition.

Irene Lanzaco, director general of AMI, told VOA it estimated the actions of Meta had cost Spanish newspapers and magazines $539.2 million in lost income between 2018 and 2023. “This loss of income has meant it is more difficult for the media to practice journalism, to pay its journalists, to mount investigations and to hold politicians to account for corruption,” she said. “It means that society becomes more polarized, and people become less involved with their communities if they do not know what is going on.”

Analysts say this is an “innovative” strategy by legacy media against tech giants, one designed to engage people outside the news business. Until now, traditional media cases against Silicon Valley centered on the theft of intellectual property from the news business, but the Spanish suit makes a claim related to alleged theft of personal data.
“Previously, all the cases that legacy media has brought have been about the piracy of intellectual property — ‘We report the news, and these people are putting it on their websites without paying for it,’” Kathy Kiely, the Lee Hills chair in Free Press Studies at the Missouri School of Journalism, told VOA. “But what this case is about is that these social media platforms have access to a lot of information about the audience to gain unfair advantage in advertising,” she said.

The lawsuit was filed with a commercial court in Madrid, reported Reuters, which saw the court papers. Matt Pollard, a spokesman for Meta Platforms, told VOA, “We have not received the legal papers on this case, so we cannot comment. All we know about it is what we have read in the media.”

The complainants include Prisa, which publishes Spain’s left-wing daily El País; Vocento, owner of ABC, a right-wing daily; and the Barcelona-based conservative daily La Vanguardia. They claim that Meta used personal data obtained without the express consent of clients, in violation of the EU General Data Protection Regulation in force since May 2018, which demands that any website request authorization to keep and use personal data.

“Of course in any other EU country, the same legal procedure could be initiated,” as it concerns an alleged violation of European regulations, Nicolas González Cuellar, a lawyer representing AMI, told Reuters.

Kiely said the Spanish case may engage the broader public and policymakers, in Europe and beyond. “[This legal case] introduces a new strategy. It is not just about the survival of the local news organization. It is about privacy,” she said. “This engages people outside the news business in a way that piracy of the intellectual property does not.”

The lawsuit is the latest attempt by media organizations that have struggled to make tech giants pay fair fees for using and sharing their content.
The legal battle comes as the Reuters Institute’s 2023 Digital News Report found that tech platforms like Meta and Google had become a “running sore” for news publishers over the past decade. “Google and Facebook [now Meta] at their height accounted for just under half of online traffic to news sites,” the report said. “Although the so-called ‘duopoly’ remains hugely consequential, our report shows how this platform position is becoming a little less concentrated in many markets, with more providers competing.” It added, “Digital audio and video are bringing new platforms into play, while some consumers have adopted less toxic and more private messaging networks for communications.”

Spanish media scored a victory against Alphabet’s Google News service, which Google shut down in Spain in 2014 before reopening it in 2022 under new legislation allowing media outlets to negotiate fees directly with the tech giant.

Last month, Google and the Canadian government reached an agreement in their dispute over the Online News Act, which would see Google continue to use Canadian news online in return for making annual payments to news companies of about $100 million. Radio Canada and CBC News reported last month that the Canadian federal government estimated earlier this year that Google’s compensation should amount to about $172 million, while Google estimated this value at $100 million.

Canadian Prime Minister Justin Trudeau said the agreement was “very good news.” “After months of holding strong, of demonstrating our commitment to local journalism, to strong independent journalists getting paid for their work … Google has agreed to properly support journalists, including local journalism,” he said.

Google said it would not have a mandatory negotiation model imposed on it for talks with the media in Canada. Instead, it preferred to deal with a single media group that would represent all media, allowing the company to limit its arbitration risk.
Google had threatened to block Canadian news content on its platforms because of the legislation but did not. In contrast, Meta ended its talks with the Canadian government last summer and stopped distributing Canadian news on Facebook and Instagram. Last month, the Reuters Institute’s 2023 report said that 29% of Canadians used Facebook for news. Around 11% used Facebook Messenger, and 10% used Instagram for the same purpose.

Eastern European Startups Come to US Searching for Opportunities

Wed, 12/13/2023 - 15:55
Immigrants from Belarus, Ukraine and other Eastern European countries are actively exploring the American IT startup market. One immigrant-run venture capital firm is helping them find investments. Evgeny Maslov has the story, narrated by Anna Rice. Camera: Michael Eckels.

Tesla Recalls Over 2 Million Vehicles to Fix Defective System That Monitors Drivers Using Autopilot

Wed, 12/13/2023 - 13:10
Detroit, Mich — Tesla is recalling more than 2 million vehicles across its model lineup to fix a defective system that’s supposed to ensure drivers are paying attention when they use Autopilot.

Documents posted Wednesday by U.S. safety regulators say the company will send out a software update to fix the problems. The recall comes after a two-year investigation by the National Highway Traffic Safety Administration into a series of crashes that happened while the Autopilot partially automated driving system was in use. Some were deadly. The agency says its investigation found Autopilot’s method of ensuring that drivers are paying attention can be inadequate and can lead to foreseeable misuse of the system.

The recall covers nearly all of the vehicles Tesla sold in the U.S. and includes models Y, S, 3 and X produced between Oct. 5, 2012, and Dec. 7 of this year. The software update includes additional controls and alerts “to further encourage the driver to adhere to their continuous driving responsibility,” the documents said. The update was to be sent to certain affected vehicles on Tuesday, with the rest getting it at a later date, the documents said.

Autopilot includes features called Autosteer and Traffic Aware Cruise Control, with Autosteer intended for use on limited access freeways when it’s not operating with a more sophisticated feature called Autosteer on City Streets. The software update apparently will limit where Autosteer can be used. “If the driver attempts to engage Autosteer when conditions are not met for engagement, the feature will alert the driver it is unavailable through visual and audible alerts, and Autosteer will not engage,” the recall documents said.
Depending on a Tesla’s hardware, the added controls include “increasing prominence” of visual alerts, simplifying how Autosteer is turned on and off, additional checks on whether Autosteer is being used outside of controlled access roads and when approaching traffic control devices, “and eventual suspension from Autosteer use if the driver repeatedly fails to demonstrate continuous and sustained driving responsibility,” the documents say.

Recall documents say that agency investigators met with Tesla starting in October to explain “tentative conclusions” about fixing the monitoring system. Tesla, it said, did not agree with the agency’s analysis but agreed to the recall on Dec. 5 in an effort to resolve the investigation.

Auto safety advocates for years have been calling for stronger regulation of the driver monitoring system, which mainly detects whether a driver’s hands are on the steering wheel. They have called for cameras to make sure a driver is paying attention, which are used by many other automakers with similar systems. Autopilot can steer, accelerate and brake automatically in its lane, but it is a driver-assist system and cannot drive itself despite its name. Independent tests have found that the monitoring system is easy to fool, so much so that drivers have been caught driving drunk or even sitting in the back seat.

In its defect report filed with the safety agency, Tesla said Autopilot’s controls “may not be sufficient to prevent driver misuse.” A message was left early Wednesday seeking further comment from the Austin, Texas, company. Tesla says on its website that Autopilot and a more sophisticated Full Self Driving system cannot drive autonomously and are meant to help drivers, who have to be ready to intervene at all times. Full Self Driving is being tested by Tesla owners on public roads. In a statement posted Monday on X, formerly Twitter, Tesla said safety is stronger when Autopilot is engaged.
NHTSA has dispatched investigators to 35 Tesla crashes since 2016 in which the agency suspects the vehicles were running on an automated system. At least 17 people have been killed. The investigations are part of a larger probe by the NHTSA into multiple instances of Teslas using Autopilot crashing into parked emergency vehicles that are tending to other crashes.

NHTSA has become more aggressive in pursuing safety problems with Teslas in the past year, announcing multiple recalls and investigations, including a recall of Full Self Driving software. In May, Transportation Secretary Pete Buttigieg, whose department includes NHTSA, said Tesla shouldn’t be calling the system Autopilot because it can’t drive itself.

In its statement Wednesday, NHTSA said the Tesla investigation remains open “as we monitor the efficacy of Tesla’s remedies and continue to work with the automaker to ensure the highest level of safety.”

US Commerce Secretary Vows 'Strongest Action' on Huawei Chip Issue

Tue, 12/12/2023 - 23:31
WASHINGTON — U.S. Commerce Secretary Gina Raimondo vowed Monday to take the “strongest action possible” in response to a semiconductor chip-making breakthrough in China that the House Foreign Affairs Committee said “almost certainly required the use of U.S. origin technology and should be an export control violation.”

In an interview with Bloomberg News, Raimondo called Huawei Technology’s advanced processor in its Mate 60 Pro smartphone released in August “deeply concerning” and said the Commerce Department investigates such things vigorously. The United States has banned chip sales to Huawei, which reportedly used a 7-nanometer chip from Chinese chip giant Semiconductor Manufacturing International Corp., or SMIC, in the phone, a technology China had not been known to be capable of producing.

Raimondo said the U.S. was also looking into the specifics of three new artificial intelligence accelerator chips that California-based Nvidia Corp. is developing for China. “We look at every spec of every new chip, obviously, to make sure it doesn’t violate the export controls,” she said. Nvidia came under U.S. scrutiny for designing China-specific chips that fell just under new Commerce Department requirements announced in October for tighter export controls on advanced AI chips for civilian use that could have military applications.

China’s Foreign Ministry responded to Raimondo’s comments Tuesday, saying the U.S. was “undermining the rights of Chinese companies” and contradicting the principles of a market economy.

'Almost certainly required US origin technology'

The U.S. House Foreign Affairs Committee in a December 7 report criticized the Commerce Department’s Bureau of Industry and Security, or BIS, the body responsible for regulating dual-use export controls.
The report said Chinese chip giant “SMIC is producing 7 nanometer chips — advanced technology for semiconductors that had been only capable of development by TSMC, Intel and Samsung.”

“Despite this breakthrough by SMIC, which almost certainly required the use of U.S. origin technology and should be an export control violation, BIS has not acted,” the 66-page report said. “We can no longer afford to avoid the truth: the unimpeded transfer of U.S. technology to China is one of the single-largest contributors to China’s emergence as one of the world’s premier scientific and technological powers.”

Excessive approvals alleged

Committee Chairman Michael McCaul said BIS had an excessive rate of approval for controlled technology transfers and lacked checks on end use, raising serious questions about the current U.S. export control mechanism. “U.S. export control officials should adopt a presumption that all [Chinese] entities will divert technology to military or surveillance uses,” said McCaul’s report, but “currently, the overwhelming approval rates for licenses or exceptions for dual-use technology transfers to China indicate that licensing officials at BIS are likely presuming that items will be used only for their intended purposes.”

According to BIS’s website, a key in determining whether an export license is needed from the Department of Commerce is knowing whether the item one intends to export has a specific Export Control Classification Number, or ECCN. All ECCNs are listed in the Commerce Control List, or CCL, which is divided into ten broad categories.

The committee’s report said that “in 2020, nearly 98% of CCL items export to China went without a license,” and “in 2021, BIS approved nearly 90% of applications for the export of CCL items to China.” The report said that between 2016 and 2021, “the United States government’s two export control officers in China conducted on average only 55 end-user checks per year of the roughly 4,000 active licenses in China.
Put another way, BIS likely verified less than 0.01% of all licenses, which represent less than 1% of all trade with China.”

China skilled in avoiding controls

But China is also skilled at avoiding U.S. export controls, analysts said. William Yu, an economist at UCLA Anderson Forecast, told VOA Mandarin in a phone interview that China can get banned chips through a third country. “For example, some countries in the Middle East set up a company in that country to buy these high-level chips from the United States. From there, one is transferred back to China,” Yu said.

Thomas Duesterberg, a senior fellow at the Hudson Institute, told VOA Mandarin in a phone interview that the Commerce Department’s BIS has a hard job. “If you forbid technology from going to one company in China, the Chinese are experts at creating another company or just moving the company to a new address and disguising its name to try to evade the controls. China is a big country and there's a lot of technology that is at stake here,” he said. “It's true on the one hand that BIS has been successful in some areas, such as advanced semiconductors in conjunction with denial of Chinese ability to buy American technology companies,” said Duesterberg. “But it's also true as the [House Foreign Affairs Committee] report emphasizes that a lot of activities that policymakers would like to restrict is not being done.”

Insufficient resources or political will?

Despite its huge responsibility to ensure that the United States stays ahead in the escalating U.S.-China science and technology competition, the Commerce Department’s BIS is small, employing just over 300 people. At the annual Reagan National Defense Forum on December 2, Secretary Raimondo lamented that BIS “has the same budget today as it did a decade ago” despite the increasing challenges and workload, reported Breaking Defense, a New York-based online publication on global defense and politics.

U.S. Representatives Elise Stefanik, Mike Gallagher, who is chairman of the House Select Committee on the Chinese Communist Party, and McCaul released a joint response to Raimondo's call for additional funds for the BIS, saying resources alone would not resolve export control shortcomings.

Raimondo also warned chip companies that the U.S. would further tighten controls to prevent cutting-edge AI technology from going to Beijing. “The threat from China is large and growing,” she said in an interview with CNBC at the December 2 forum. “China wants access to our most sophisticated semiconductors, and we can’t afford to give them that access. We’re not just going to deny a single company in China, we’re going to deny the whole country access to our cutting-edge semiconductors.”

EU Establishes World-Leading AI Rules, Could That Affect Everyone?

Tue, 12/12/2023 - 00:23
European Union officials worked into the late hours last week hammering out an agreement on world-leading rules meant to govern the use of artificial intelligence in the 27-nation bloc. The Artificial Intelligence Act is the latest set of regulations designed to govern technology in Europe that may be destined to have global impact. Here's a closer look at the AI rules:

What is the AI Act and how does it work?

The AI Act takes a "risk-based approach" to products or services that use artificial intelligence and focuses on regulating uses of AI rather than the technology itself. The legislation is designed to protect democracy, the rule of law and fundamental rights like freedom of speech, while still encouraging investment and innovation.

The riskier an AI application is, the stiffer the rules. Those that pose limited risk, such as content recommendation systems or spam filters, would have to follow only light rules, such as revealing that they are powered by AI. High-risk systems, such as medical devices, face tougher requirements like using high-quality data and providing clear information to users.

Some AI uses are banned because they're deemed to pose an unacceptable risk, like social scoring systems that govern how people behave, some types of predictive policing, and emotion recognition systems in schools and workplaces. People in public can't have their faces scanned by police using AI-powered remote "biometric identification" systems, except in cases of serious crimes like kidnapping or terrorism.

The AI Act won't take effect until two years after final approval from European lawmakers, expected in a rubber-stamp vote in early 2024. Violations could draw fines of up to 35 million euros ($38 million) or 7% of a company's global revenue.

How does the AI Act affect the rest of the world?
The AI Act will apply to the EU's nearly 450 million residents, but experts say its impact could be felt far beyond because of Brussels' leading role in drawing up rules that act as a global standard. The EU has played that role before with previous tech directives, most notably mandating a common charging plug that forced Apple to abandon its in-house Lightning cable. While many other countries are figuring out whether and how they can rein in AI, the EU's comprehensive regulations are poised to serve as a blueprint.

"The AI Act is the world's first comprehensive, horizontal and binding AI regulation that will not only be a game-changer in Europe but will likely significantly add to the global momentum to regulate AI across jurisdictions," said Anu Bradford, a Columbia Law School professor who's an expert on EU law and digital regulation. "It puts the EU in a unique position to lead the way and show to the world that AI can be governed, and its development can be subjected to democratic oversight," she said.

Even what the law doesn't do could have global repercussions, rights groups said. By not pursuing a full ban on live facial recognition, Brussels has "in effect greenlighted dystopian digital surveillance in the 27 EU Member States, setting a devastating precedent globally," Amnesty International said. The partial ban is "a hugely missed opportunity to stop and prevent colossal damage to human rights, civil space and rule of law that are already under threat through the EU." Amnesty also decried lawmakers' failure to ban the export of AI technologies that can harm human rights, including for use in social scoring, something China does to reward obedience to the state through surveillance.

What are other countries doing about AI regulation?

The world's two major AI powers, the U.S. and China, also have started the ball rolling on their own rules. U.S. President Joe Biden signed a sweeping executive order on AI in October, which is expected to be bolstered by legislation and global agreements. It requires leading AI developers to share safety test results and other information with the government. Agencies will create standards to ensure AI tools are safe before public release and issue guidance to label AI-generated content. Biden's order builds on voluntary commitments made earlier by technology companies including Amazon, Google, Meta and Microsoft to make sure their products are safe before they're released.

China, meanwhile, has released "interim measures" for managing generative AI, which apply to text, pictures, audio, video and other content generated for people inside China. President Xi Jinping has also proposed a Global AI Governance Initiative, calling for an open and fair environment for AI development.

How will the AI Act affect ChatGPT?

The spectacular rise of OpenAI's ChatGPT showed that the technology was making dramatic advances and forced European policymakers to update their proposal. The AI Act includes provisions for chatbots and other so-called general purpose AI systems that can do many different tasks, from composing poetry to creating video and writing computer code.

Officials took a two-tiered approach, with most general-purpose systems facing basic transparency requirements like disclosing details about their data governance and, in a nod to the EU's environmental sustainability efforts, how much energy they used to train the models on vast troves of written works and images scraped off the internet. They also need to comply with EU copyright law and summarize the content they used for training. Stricter rules are in store for the most advanced AI systems with the most computing power, which pose "systemic risks" that officials want to keep from spreading to services that other software developers build on top of them.

US States Suing Meta Over Alleged Harm to Young Users

Mon, 12/11/2023 - 21:27
Lawmakers and parents are blaming social media platforms for contributing to mental health problems in young people. A group of U.S. states is suing the owner of Instagram and Facebook for promoting its platforms to children despite knowing some of the psychological harms and safety risks they pose. From New York, VOA's Tina Trinh reports that a cause-and-effect relationship between social media and mental health may not be so clear.

Nvidia to Expand Ties with Vietnam, Support AI Development

Mon, 12/11/2023 - 08:29
U.S. chipmaker Nvidia's chief executive said on Monday the company will expand its partnership with Vietnam's top tech firms and support the country in training talent for developing artificial intelligence and digital infrastructure.

Nvidia, which has already invested $250 million in Vietnam, has so far partnered with leading tech companies to deploy AI in the cloud, automotive and healthcare industries, a document published by the White House in September showed when Washington upgraded diplomatic relations with Vietnam.

"Vietnam is already our partner as we have millions of clients here," Jensen Huang, Nvidia's CEO, said at an event in Hanoi on his first visit to the country. "Vietnam and Nvidia will deepen our relations, with Viettel, FPT, Vingroup, VNG being the partners Nvidia looks to expand partnership with," Huang said, adding that Nvidia would support Vietnam's artificial intelligence training and infrastructure.

Reuters reported last week Nvidia was set to discuss cooperation deals on semiconductors with Vietnamese tech companies and authorities in a meeting on Monday. Huang's visit comes at a time when Vietnam is trying to expand into chip designing and possibly chip-making as trade tensions between the United States and China create opportunities for Vietnam in the industry.

At Monday's event, Vietnam's investment minister, Nguyen Chi Dzung, said the country had been preparing mechanisms and incentives to attract investment projects in the semiconductor and artificial intelligence industries. Dzung also asked Nvidia to consider setting up a research and development facility in the country, following Huang's proposal to set up a base in Vietnam after his meeting with Vietnamese Prime Minister Pham Minh Chinh on Sunday.

Elon Musk Restores X Account of Conspiracy Theorist Alex Jones

Sun, 12/10/2023 - 15:48
Elon Musk has restored the X account of conspiracy theorist Alex Jones, pointing to a poll on the social media platform formerly known as Twitter that came out in favor of the Infowars host who repeatedly called the 2012 Sandy Hook school shooting a hoax.

The move poses new uncertainty for advertisers, who have fled X over concerns about hate speech appearing alongside their ads, and Jones is the latest divisive public personality to get back a banned account.

Musk posted a poll on Saturday asking if Jones should be reinstated, with the results showing 70% of those who responded in favor. Early Sunday, Musk tweeted, "The people have spoken and so it shall be." A few hours later, Jones' posts were visible again and he retweeted a post about his video game. He and his Infowars show had been permanently banned in 2018 for abusive behavior.

Musk, who has described himself as a free speech absolutist, said the move was about protecting those rights. In response to a user who posted that "permanent account bans are antithetical to free speech," Musk wrote, "I find it hard to disagree with this point." The billionaire Tesla CEO also tweeted it's likely that Community Notes — X's crowd-sourced fact-checking service — "will respond rapidly to any AJ post that needs correction."

It is a major turnaround for Musk, who previously said he wouldn't let Jones back on the platform despite repeated calls to do so. Last year, Musk pointed to the death of his first-born child and tweeted, "I have no mercy for anyone who would use the deaths of children for gain, politics or fame."

Jones repeatedly has said on his show that the 2012 shooting at Sandy Hook Elementary School in Newtown, Connecticut, that killed 20 children and six educators never happened and was staged in an effort to tighten gun laws. Relatives of many of the victims sued Jones in Connecticut and Texas, winning nearly $1.5 billion in judgments against him.
In October, a judge ruled that Jones could not use bankruptcy protection to avoid paying more than $1.1 billion of that debt. Relatives of the school shooting victims testified at the trials about being harassed and threatened by Jones' believers, who sent threats and even confronted the grieving families in person, accusing them of being "crisis actors" whose children never existed. Jones is appealing the judgments, saying he didn't get fair trials and that his speech was protected by the First Amendment.

Restoring Jones' account comes as Musk has seen a slew of big brands, including Disney and IBM, stop advertising on X after a report by liberal advocacy group Media Matters said ads were appearing alongside pro-Nazi content and white nationalist posts. They also were scared away after Musk himself endorsed an antisemitic conspiracy theory in response to a post on X. The Tesla CEO later apologized and visited Israel, where he toured a kibbutz attacked by Hamas militants and held talks with top Israeli leaders.

But he also has said advertisers are engaging in "blackmail" and, using a profanity, essentially told them to go away. "Don't advertise," Musk said in an on-stage interview late last month at The New York Times DealBook Summit.

After buying Twitter last year, Musk said he was granting "amnesty" for suspended accounts and has since reinstated former President Donald Trump; Ye, the rapper formerly known as Kanye West, following two suspensions over antisemitic posts last year; and far-right Rep. Marjorie Taylor Greene, who was kicked off the platform for violating its COVID-19 misinformation policies. Trump, who was banned for encouraging the Jan. 6, 2021, Capitol insurrection, has his own social media site, Truth Social, and has only tweeted once since being allowed back on X.

Understanding Carbon Capture and Its Discussion at COP28

Sat, 12/09/2023 - 16:35
The future of fossil fuels is at the center of the United Nations climate summit in Dubai, where many activists, experts and nations are calling for an agreement to phase out the oil, gas and coal responsible for warming the planet. On the other side: energy companies and oil-rich nations with plans to keep drilling well into the future.

In the background of those discussions are carbon capture and carbon removal, technologies most, if not all, producers are counting on to meet their pledges to get to net-zero emissions. Skeptics worry the technology is being oversold to allow the industry to maintain the status quo. “The industry needs to commit to genuinely helping the world meet its energy needs and climate goals — which means letting go of the illusion that implausibly large amounts of carbon capture are the solution,” International Energy Agency Executive Director Fatih Birol said before the start of talks.

What is carbon capture?

Many industrial facilities such as coal-fired power plants and ethanol plants produce carbon dioxide. To stop those planet-warming emissions from reaching the atmosphere, businesses can install equipment to separate that gas from all the other gases coming out of the smokestack and transport it to where it can be permanently stored underground. And even among industries trying to reduce emissions, some are likely to always produce some carbon, such as cement manufacturers that use a chemical process that releases CO2.

“We call that a mitigation technology, a way to stop the increased concentrations of CO2 in the atmosphere,” said Karl Hausker, an expert on getting to net-zero emissions at World Resources Institute, a climate-focused nonprofit that supports sharp fossil fuel reductions along with a limited role for carbon capture.

The captured carbon is concentrated into a form that can be transported in a vehicle or through a pipeline to a place where it can be injected underground for long-term storage.

What is carbon removal?
Then there's carbon removal. Instead of capturing carbon from a single, concentrated source, the objective is to remove carbon that's already in the atmosphere. This already happens when forests are restored, for example, but there's a push to deploy technology, too. One type captures carbon dioxide directly from the air, using chemicals to pull it out as air passes through.

For some, carbon removal is essential during a global transition to clean energy that will take years. For example, despite notable gains for electric vehicles in some countries, gas-powered cars will be operating well into the future. And some industries, like shipping and aviation, are challenging to fully decarbonize.

“We have to remove some of what’s in the atmosphere in addition to stopping the emissions,” said Jennifer Pett-Ridge, who leads the federally supported Lawrence Livermore National Laboratory’s carbon initiative in the United States, the world's second-leading emitter of greenhouse gases.

How is it going?

Many experts say the technology to capture and store carbon works, but it's expensive and still in the early days of deployment. About 40 large-scale carbon capture projects are in operation around the world, capturing roughly 45 million metric tons of carbon dioxide each year, according to the International Energy Agency, or IEA. That's a tiny amount — roughly 0.1% — of the 36.8 billion metric tons emitted globally, as tallied by the Global Carbon Project.

The IEA says the history of carbon capture “has largely been one of unmet expectations.” The agency has analyzed how the world can achieve net-zero emissions, and its pathway relies heavily on lowering emissions by slashing fossil fuel use. Carbon capture is just a sliver of the solution — less than 10% — but despite its comparatively small role, its expansion is still behind schedule. The pace of new projects is picking up, but they face significant obstacles.
In the United States, there’s opposition to CO2 pipelines that move carbon to storage sites. Safety is one concern; in 2020, a CO2 pipeline in Mississippi ruptured, releasing carbon dioxide that displaced breathable air near the ground and sent dozens of people to hospitals. The federal government is working on improved safety standards.

Who supports carbon capture?

The American Petroleum Institute says oil and gas will remain a critical energy source for decades, meaning that for the world to reduce its carbon emissions, rapidly expanding carbon capture technology is “key to cleaner energy use across the economy.” A check of major oil companies' plans to get to net-zero emissions finds most of them relying on carbon capture in some way.

The Biden administration wants more investment in carbon capture and removal, too, building on U.S. spending that is already large compared with the rest of the world. But it’s an industry that needs subsidies to attract private financing. The Inflation Reduction Act makes the tax benefits much more generous: investors can claim a $180-per-ton credit for removing carbon from the air and storing it underground, for example. And the Department of Energy has billions of dollars to support new projects.

“What we are talking about now is taking a technology that has been proven and has been tested but applying it much more broadly and also applying it in sectors where there is a higher cost to deploy,” said Jessie Stolark, executive director of the Carbon Capture Coalition, an industry advocacy group.

Investment is picking up. The EPA is considering dozens of applications for wells that can store carbon, and in places such as Louisiana and North Dakota, local leaders are fighting to attract projects and investment.

Who is against it?

Some environmentalists argue that fossil fuel companies are holding up carbon capture to distract from the need to quickly phase out oil, gas and coal.
“The fossil fuel industry has proven itself to be dangerous and deceptive,” said Shaye Wolf, climate science director at the Center for Biological Diversity.

There are other problems. Some projects haven’t met their carbon removal targets. A 2021 report by the U.S. Government Accountability Office found that of eight demonstration projects aimed at capturing and storing carbon from coal plants, just one was operating when the report was published, despite hundreds of millions of dollars in funding.

Opponents also note that carbon capture can prolong the life of a polluting plant that would otherwise shut down sooner. That burden falls especially on poorer, minority communities that have long lived near heavily polluting facilities.

Europe Reaches Deal on World's First Comprehensive AI Rules

Sat, 12/09/2023 - 02:36
European Union negotiators clinched a deal Friday on the world's first comprehensive artificial intelligence rules, paving the way for legal oversight of the technology used in popular generative AI services such as ChatGPT, which have promised to transform everyday life and spurred warnings of existential dangers to humanity.

Negotiators from the European Parliament and the bloc's 27 member countries overcame big differences on controversial points, including generative AI and police use of facial recognition surveillance, to sign a tentative political agreement for the Artificial Intelligence Act.

"Deal!" tweeted European Commissioner Thierry Breton just before midnight. "The EU becomes the very first continent to set clear rules for the use of AI."

The result came after marathon closed-door talks this week, with the initial session lasting 22 hours before a second round kicked off Friday morning. Officials were under the gun to secure a political victory for the flagship legislation but were expected to leave the door open to further talks to work out the fine print, likely to bring more backroom lobbying.

Out front

The EU took an early lead in the global race to draw up AI guardrails when it unveiled the first draft of its rulebook in 2021. The recent boom in generative AI, however, sent European officials scrambling to update a proposal poised to serve as a blueprint for the world.

The European Parliament will still need to vote on the act early next year, but with the deal done, that's a formality, Brando Benifei, an Italian lawmaker co-leading the body's negotiating efforts, told The Associated Press late Friday. "It's very, very good," he said by text message when asked if it included everything he wanted. "Obviously we had to accept some compromises but overall very good."
The eventual law wouldn't fully take effect until 2025 at the earliest and threatens stiff financial penalties for violations of up to 35 million euros ($38 million) or 7% of a company's global turnover.

Generative AI systems like OpenAI's ChatGPT have exploded into the world's consciousness, dazzling users with the ability to produce humanlike text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection, and even human life itself. Now the U.S., U.K., China and global coalitions like the Group of Seven major democracies have jumped in with their own proposals to regulate AI, though they're still catching up to Europe.

'A powerful example'

Strong and comprehensive regulation from the EU "can set a powerful example for many governments considering regulation," said Anu Bradford, a Columbia Law School professor who is an expert on EU and digital regulation. Other countries "may not copy every provision but will likely emulate many aspects of it."

AI companies subject to the EU's rules will also likely extend some of those obligations to markets outside the continent, she said. "After all, it is not efficient to retrain separate models for different markets."

Others are worried that the agreement was rushed through. "Today's political deal marks the beginning of important and necessary technical work on crucial details of the AI Act, which are still missing," said Daniel Friedlaender, head of the European office of the Computer and Communications Industry Association, a tech industry lobby group.

The AI Act was originally designed to mitigate the dangers of specific AI functions based on their level of risk, from low to unacceptable. But lawmakers pushed to expand it to foundation models, the advanced systems that underpin general-purpose AI services like ChatGPT and Google's Bard chatbot.
Foundation models looked set to be one of the biggest sticking points for Europe. However, negotiators reached a tentative compromise early in the talks, despite opposition led by France, which called instead for self-regulation to help homegrown European generative AI companies compete with big U.S. rivals, including OpenAI's backer Microsoft.

Also known as large language models, these systems are trained on vast troves of written works and images scraped off the internet. They give generative AI systems the ability to create something new, unlike traditional AI, which processes data and completes tasks using predetermined rules. Under the deal, the most advanced foundation models that pose the biggest "systemic risks" will get extra scrutiny, including requirements to disclose more information, such as how much computing power was used to train the systems.

Elevation of threats

Researchers have warned that these powerful foundation models, built by a handful of big tech companies, could be used to supercharge online disinformation and manipulation, cyberattacks or the creation of bioweapons. Rights groups also caution that the lack of transparency about the data used to train the models poses risks to daily life, because the models act as basic structures for software developers building AI-powered services.

The thorniest topic turned out to be AI-powered facial recognition surveillance systems, and negotiators found a compromise only after intensive bargaining. European lawmakers wanted a full ban on public use of facial scanning and other "remote biometric identification" systems because of privacy concerns, while governments of member countries wanted exemptions so law enforcement could use them to tackle serious crimes like child sexual exploitation or terrorist attacks. Civil society groups were more skeptical.
"Whatever the victories may have been in these final negotiations, the fact remains that huge flaws will remain in this final text," said Daniel Leufer, a senior policy analyst at the digital rights group Access Now. Along with the law enforcement exemptions, he also cited a lack of protection for AI systems used in migration and border control, and "big gaps in the bans on the most dangerous AI systems." 
