British-based startup ARC unveiled its first motorcycle model in Milan this week, a machine described as fast, advanced and expensive. The so-called Vector costs more than $100,000, but ARC says it's for good reason. VOA Correspondent Mariama Diallo reports.
Investigative group Bellingcat and Russian website The Insider are suggesting that Russian intelligence has infiltrated the computer infrastructure of a company that processes British visa applications. The investigation, published Friday, aims to show how two suspected Russian military intelligence agents, who have been charged with poisoning a former Russian spy in the English city of Salisbury, may have obtained British visas.

The Insider and Bellingcat said they interviewed the former chief technical officer of a company that processes visa applications for several consulates in Moscow, including that of Britain. The man, who fled Russia last year and applied for asylum in the United States, said he had been coerced to work with agents of the main Russian intelligence agency FSB, who revealed to him that they had access to the British visa center's CCTV cameras and had a diagram of the center's computer network. The two outlets say they have obtained the man's deposition to the U.S. authorities but have decided against publishing the man's name, for his own safety.

The Insider and Bellingcat, however, did not demonstrate a clear link between the alleged efforts of Russian intelligence to penetrate the visa processing system and Alexander Mishkin and Anatoly Chepiga, who have been charged with poisoning Sergei Skripal in Salisbury in March this year. The man also said that FSB officers told him in spring 2016 that they were going to send two people to Britain and asked for his assistance with the visa applications. The timing points to the first reported trip to Britain of the two men, who traveled under the names of Alexander Petrov and Anatoly Boshirov. The man, however, said he told the FSB that there was no way he could influence the decision-making on visa applications.
The man said he was coerced to sign an agreement to collaborate with the FSB after one of its officers threatened to jail his mother, and was asked to create a "backdoor" to the computer network. He said he sabotaged those efforts before he fled Russia in early 2017. In September, British intelligence released surveillance images of the agents of Russian military intelligence GRU accused of the March nerve agent attack on double agent Skripal and his daughter in Salisbury. Bellingcat and The Insider quickly exposed the agents' real names and the media, including The Associated Press, were able to corroborate their real identities. The visa application processing company, TLSContact, and the British Home Office were not immediately available for comment.
Companies could help refugees rebuild their lives by paying them to boost artificial intelligence (AI) using their phones and giving them digital skills, a tech nonprofit said Thursday. REFUNITE has developed an app, LevelApp, which is being piloted in Uganda to allow people who have been uprooted by conflict to earn instant money by “training” algorithms for AI. Wars, persecution and other violence have uprooted a record 68.5 million people, according to the U.N. refugee agency. People forced to flee their homes lose their livelihoods and struggle to create a source of income, REFUNITE co-chief executive Chris Mikkelsen told the Trust Conference in London. “This provides refugees with a foothold in the global gig economy,” he told the Thomson Reuters Foundation’s two-day event, which focuses on a host of human rights issues.

$20 a day for AI work

A refugee in Uganda currently earning $1.25 a day doing basic tasks or menial jobs could make up to $20 a day doing simple AI labeling work on their phones, Mikkelsen said. REFUNITE says the app could be particularly beneficial for women, as the work can be done from home and is more lucrative than traditional sources of income such as crafts. The cash could enable refugees to buy livestock, educate children and access health care, leaving them less dependent on aid and helping them recover faster, according to Mikkelsen. The work would also allow them to build digital skills they could take with them when they returned home, REFUNITE says. “This would give them the ability to rebuild a life ... and the dignity of no longer having to rely solely on charity,” Mikkelsen told the Thomson Reuters Foundation.

Teaching the machines

AI is the development of computer systems that can perform tasks that normally require human intelligence. It is being used in a vast array of products, from driverless cars to agricultural robots that can identify and eradicate weeds and computers able to identify cancers.
In order to “teach” machines to mimic human intelligence, people must repeatedly label images and other data until the algorithm can detect patterns without human intervention. REFUNITE, based in California, is testing the app in Uganda, where it has launched a pilot project involving 5,000 refugees, mainly from South Sudan and the Democratic Republic of Congo. It hopes to scale up to 25,000 refugees within two years. Mikkelsen said the initiative was a win-win as it would also benefit companies by slashing costs. Another tech company, DeepBrain Chain, has committed to paying 200 refugees for a test period of six months, he said.
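The labeling loop described above can be sketched as a toy example. Everything here is invented for illustration (the data, the "cat"/"dog" labels and the simple nearest-centroid rule); it is not REFUNITE's actual pipeline, only the general idea of human-labeled examples teaching a model to classify unseen data:

```python
# Toy sketch of supervised labeling: humans tag a handful of examples,
# and a simple model then labels new data on its own.

def centroid(points):
    """Average of a list of 2-D feature vectors."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(labeled):
    """labeled: list of ((x, y), label) pairs. Returns one centroid per label."""
    groups = {}
    for features, label in labeled:
        groups.setdefault(label, []).append(features)
    return {label: centroid(pts) for label, pts in groups.items()}

def predict(model, features):
    """Assign the label of the nearest centroid (by squared distance)."""
    def dist2(c):
        return (features[0] - c[0]) ** 2 + (features[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Humans label a few examples by hand...
labeled = [((0.9, 0.8), "cat"), ((1.0, 1.1), "cat"),
           ((4.0, 4.2), "dog"), ((3.8, 4.1), "dog")]
model = train(labeled)

# ...after which the model labels unseen data without human intervention.
print(predict(model, (1.2, 0.9)))  # → cat
print(predict(model, (4.1, 3.9)))  # → dog
```

Real labeling work for AI is the first half of this loop: producing the `labeled` list, item by item, until the model's predictions become reliable.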
Facebook says it is getting better at proactively removing hate speech and changing the incentives that result in the most sensational and provocative content becoming the most popular on the site. The company has done so, it says, by ramping up its operations so that computers can review and make quick decisions on large amounts of content with thousands of reviewers making more nuanced decisions. In the future, if a person disagrees with Facebook's decision, he or she will be able to appeal to an independent review board. Facebook "shouldn't be making so many important decisions about free expression and safety on our own," Facebook CEO Mark Zuckerberg said in a call with reporters Thursday. But as Zuckerberg detailed what the company has accomplished in recent months to crack down on spam, hate speech and violent content, he also acknowledged that Facebook has far to go. "There are issues you never fix," he said. "There's going to be ongoing content issues."

Company's actions

In the call, Zuckerberg addressed a recent story in The New York Times that detailed how the company fought back during some of its biggest controversies over the past two years, such as the revelation of how the network was used by Russian operatives in the 2016 U.S. presidential election. The Times story suggested that company executives first dismissed early concerns about foreign operatives, then tried to deflect public attention away from Facebook once the news came out. Zuckerberg said the firm made mistakes and was slow to understand the enormity of the issues it faced. "But to suggest that we didn't want to know is simply untrue," he said. Zuckerberg also said he didn't know the firm had hired Definers Public Affairs, a Washington, D.C., consulting firm that spread negative information about Facebook competitors as the social networking firm was in the midst of one scandal after another. Facebook severed its relationship with the firm.
"It may be normal in Washington, but it's not the kind of thing I want Facebook associated with, which is why we won't be doing it," Zuckerberg said. The firm posted a rebuttal to the Times story.

Content removed

Facebook said it is getting better at proactively finding and removing content such as spam, violent posts and hate speech. The company said it removed or took other action on 15.4 million pieces of violent content between June and September of this year, about double what it removed in the prior three months. But Zuckerberg and other executives said Facebook still has more work to do in places such as Myanmar. In the third quarter, the firm said it proactively identified 63 percent of the hate speech it removed, up from 13 percent in the last quarter of 2017. At least 100 Burmese language experts are reviewing content, the firm said. One issue that continues to dog Facebook is that some of the most popular content is also the most sensational and provocative. Facebook said it now penalizes what it calls "borderline content" so it gets less distribution and engagement. "By fixing this incentive problem in our services, we believe it'll create a virtuous cycle: by reducing sensationalism of all forms, we'll create a healthier, less-polarized discourse where more people feel safe participating," Zuckerberg wrote in a post. Critics of the company, however, said Zuckerberg hasn't gone far enough to address the inherent problems of Facebook, which has 2 billion users. "We have a man-made, for-profit, simultaneous communication space, marketplace and battle space and that it is, as a result, designed not to reward veracity or morality but virality," said Peter W. Singer, strategist and senior fellow at New America, a nonpartisan think tank, at an event Thursday in Washington, D.C. VOA national security correspondent Jeff Seldin contributed to this report.
Super-realistic face masks made by a tiny company in rural Japan are in demand from the domestic tech and entertainment industries and from countries as far away as Saudi Arabia. The 300,000-yen ($2,650) masks, made of resin and plastic by five employees at REAL-f Co., attempt to accurately duplicate an individual's face down to fine wrinkles and skin texture. Company founder Osamu Kitagawa came up with the idea while working at a printing machine manufacturer. But it took him two years of experimentation before he found a way to use three-dimensional facial data from high-quality photographs to make the masks, and started selling them in 2011. The company, based in the western prefecture of Shiga, receives about 100 orders every year from entertainment, automobile, technology and security companies, mainly in Japan. For example, a Japanese car company ordered a mask of a sleeping face to improve its facial recognition technology to detect if a driver had dozed off, Kitagawa said. "I am proud that my product is helping further development of facial recognition technology," he added. "I hope that the developers would enhance face identification accuracy using these realistic masks." Kitagawa, 60, said he had also received orders from organizations linked to the Saudi government to create masks for the king and princes. "I was told the masks were for portraits to be displayed in public areas," he said. Kitagawa said he works with clients carefully to ensure his products will not be used for illicit purposes and cause security risks, but added he could not rule out such threats. He said his goal was to create 100 percent realistic masks, and he hoped to use softer materials, such as silicone, in the future. "I would like these masks to be used for medical purposes, which is possible once they can be made using soft materials," he said. "And as humanoid robots are being developed, I hope this will help developers to create [more realistic robots] at a low cost."
Office workers often complain that the building is either too hot or too cold. Now, engineers and architects are working on creating "sentient buildings" that can cater to the personal needs and well-being of each employee in the hopes of increasing productivity. VOA's Elizabeth Lee has this report from Los Angeles.
China’s state-run Xinhua News has debuted what it called the world’s first artificial intelligence (AI) anchor. But the novelty has generated more dislikes than likes online among Chinese netizens, with many calling the new virtual host “a news-reading device without a soul.” Analysts say the latest creation has showcased China’s short-term progress in voice recognition, text mining and semantic analysis, but challenges remain ahead for its long-term ambition of becoming an AI superpower by 2030.

Nonhuman anchors

Collaborating with Chinese search engine Sogou, Xinhua introduced two AI anchors, one for English broadcasts and the other for Chinese, both of which are based on images of the agency’s real newscasters, Zhang Zhao and Qiu Hao respectively. In its inaugural broadcast last week, the English-speaking anchor was more tech cheerleader than newshound, rattling off lines few anchors would be caught dead reading, such as: “the development of the media industry calls for continuous innovation and deep integration with the international advanced technologies.” It also promised “to work tirelessly to keep you [audience] informed as texts will be typed into my system uninterrupted” 24/7 across multiple platforms simultaneously if necessary, according to the news agency.

No soul

Local audiences appear to be unimpressed, critiquing the news bots’ not-so-human touch and synthesized voices. On Weibo, China’s Twitter-like microblogging platform, more than one user wrote that such anchors have “no soul,” in response to Xinhua’s announcement. And one user joked: “what if we have an AI [country] leader?” while another questioned what it stands for in terms of journalistic values by saying “What a nutcase. Fake news is on every day.” Others pondered the implications AI news bots might have on employment and workers. “It all comes down to production costs, which will determine if [we] lose jobs,” one Weibo user wrote.
Some argued that only low-end, labor-intensive jobs will be easily replaced by intelligent robots, while others gloated about the possibility of employers utilizing an army of low-cost robots to make a fortune.

A simple use case

Industry experts said the digital anchor system is based on images of real people and possibly animated parts of their mouths and faces, with machine-learning technology recreating humanlike speech patterns and facial movements. It then uses a synthesized voice for the delivery of the news broadcast. The creation showcases China’s progress in voice recognition, text mining and semantic analysis, all of which fall under natural language processing, according to Liu Chien-chih, secretary-general of the Asia IoT Alliance (AIOTA). But that’s just one of many aspects of AI technology, he wrote in an email to VOA. Given the pace of experimental AI adoption by Chinese businesses, more user scenarios or user-interface designs can be anticipated in China, Liu added. Chris Dong, director of China research at the market intelligence firm IDC, agreed the digital anchor is as simple as what he calls a “use case” for AI-powered services to attract commercials and audiences. He said, in an email to VOA, that China has fast-tracked its big data advantage around consumers or internet of things (IoT) infrastructure to add commercial value. Artificial intelligence has also allowed China to accelerate its digital transformation across various industries and value chains, which are made smarter and more efficient, Dong added.

Far from a threat to the US

But both said China is far from a threat to challenge U.S. leadership on AI, given its lack of an open market and respect for intellectual property rights (IPRs) as well as its lagging innovative competency on core AI technologies.
Earlier, Lee Kai-fu, a well-known venture capitalist who led Google China before the company pulled out of the country, was quoted by news website TechCrunch as saying that the United States may have created artificial intelligence, but China is taking the ball and running with it when it comes to one of the world’s most pivotal technology innovations. Lee summed up four major drivers behind his observation that China is beating the United States in AI: abundant data, hungry entrepreneurs, growing AI expertise, and massive government support and funding. Beijing has set a goal to become an AI superpower by 2030 and to turn the sector into a $150 billion industry. Yet IDC’s Dong cast doubts on AI’s adoption rate and effectiveness in China’s traditional sectors. In some, such as manufacturing, the situation is worsening, he said. He said China’s “state capitalism may have its short-term efficiency and gain, but over the longer-term, it is the open market that is fundamental to building an effective innovation ecosystem.” The analyst urged China to open up and allow multinational software and services to contribute to its digital economic transformation. “China’s ‘Made-in-China 2025’ should go back to the original flavor … no longer Made and Controlled by Chinese, but more [of] an Open Platform of Made-in-China that both local and foreign players have a level-playing field,” he said. In addition to a significant gap in core technologies, China’s failure to uphold IPRs will work against its future development of AI software, “which is often sold many-fold in the U.S. than in China as the Chinese tend to think intangible assets are free,” AIOTA’s Liu said.
New web and phone apps in India are helping women stay safe in public spaces by making it easier for them to report harassment and get help, developers say. Women are increasingly turning to technology to stay safe in public spaces, which in turn helps the police to map “harassment prone” spots — from dimly lit roads to bus routes and street corners. Safety is the biggest concern for women using public and private transport, according to a Thomson Reuters Foundation survey released Thursday, as improving city access for women becomes a major focus globally. “Women always strategize on how to access public spaces, from how to dress to what mode of transport to take, timings and whether they should travel alone or in a group,” said Sameera Khan, columnist and co-author of “Why Loiter? Women And Risk On Mumbai Streets.”

Reported crimes up 80 percent

Indian government data shows reported cases of crime against women rose by more than 80 percent between 2007 and 2016. The fatal gang rape of a young woman on a bus in New Delhi in 2012 put the spotlight on the dangers women face in India’s public spaces. The incident spurred Supreet Singh of charity Red Dot Foundation to create the SafeCity app that encourages women across 11 Indian cities to report harassment and flag hotspots. “We want to bridge the gap between the ground reality of harassment in public spaces and what is actually being reported,” said Singh, a speaker at the Thomson Reuters Foundation’s annual Trust Conference on Thursday. The aim is to take the spotlight off the victim and focus on the areas where crimes are committed so action can be taken. Dimly lit lanes, crowded public transport, paths leading to community toilets, basements, parking lots and parks are places where Indian women feel most vulnerable, campaigners say. Stigma attached to sexual harassment and an insensitive police reporting mechanism result in many cases going unreported, rights campaigners say.
Apps are promising

Apps like SafeCity, My Safetipin and Himmat (courage) promise anonymity to women reporting crimes and share data collected through the app with government agencies such as the police, municipal corporations and the transport department. “The data has helped in many small ways,” said Singh of the Red Dot Foundation. “From getting the police to increase patrolling in an area prone to ‘eve-teasing’ to getting authorities to increase street lighting in dark alleys, the app is bringing change.” Police in many Indian cities, including New Delhi, Gurgaon and Chandigarh, are also encouraging women to use apps to register complaints, promising prompt action. “Safety apps are another such strategy that could be applied by women but I worry that by giving these apps, everyone else, most importantly the state, should not abdicate its responsibility towards public safety,” Khan said.
Democratic U.S. Representative David Cicilline, expected to become the next chairman of House Judiciary Committee's antitrust panel, said on Wednesday that Facebook cannot be trusted to regulate itself and Congress should take action. Cicilline, citing a report in the New York Times on Facebook's efforts to deal with a series of crises, said on Twitter: "This staggering report makes clear that @Facebook executives will always put their massive profits ahead of the interests of their customers." "It is long past time for us to take action," he said. Facebook did not immediately respond to a request for comment. Facebook Chief Executive Mark Zuckerberg said a year ago that the company would put its "community" before profit, and it has doubled its staff focused on safety and security issues since then. Spending also has increased on developing automated tools to catch propaganda and material that violates the company's posting policies. Other initiatives have brought increased transparency about the administrators of pages and purchasers of ads on Facebook. Some critics, including lawmakers and users, still contend that Facebook's bolstered systems and processes are prone to errors and that only laws will result in better performance. The New York Times said Zuckerberg and the company's chief operating officer, Sheryl Sandberg, ignored warning signs that the social media company could be "exploited to disrupt elections, broadcast viral propaganda and inspire deadly campaigns of hate around the globe." And when the warning signs became evident, they "sought to conceal them from public view." "We've known for some time that @Facebook chose to turn a blind eye to the spread of hate speech and Russian propaganda on its platform," said Cicilline, who will likely take the reins of the subcommittee on regulatory reform, commercial and antitrust law when the new, Democratic-controlled Congress is seated in January. 
"Now we know that once they knew the truth, top @Facebook executives did everything they could to hide it from the public by using a playbook of suppressing opposition and propagating conspiracy theories," he said. "Next January, Congress should get to work enacting new laws to hold concentrated economic power to account, address the corrupting influence of corporate money in our democracy, and restore the rights of Americans," Cicilline said.
The Federal Communications Commission on Wednesday launched the agency's first high-band 5G spectrum auction as it works to clear space for next-generation faster networks. Bidding began Wednesday on spectrum in the 28 GHz band and will be followed by bidding for spectrum in the 24 GHz band. The FCC is making 1.55 gigahertz of spectrum available and the auctions will be followed by a 2019 auction of three more millimeter-wave spectrum bands — 37 GHz, 39 GHz and 47 GHz. "These airwaves will be critical in deploying 5G services and applications," FCC Chairman Ajit Pai said Wednesday. 5G networks are expected to be at least 100 times faster than current 4G networks and cut latency, or delays, to less than one-thousandth of a second from one-hundredth of a second in 4G. They also will allow for innovations in a number of different fields. While millimeter-wave spectrum offers faster speeds, it cannot cover big geographic areas and will require significant new small cell infrastructure deployments. FCC Commissioner Brendan Carr said the spectrum being auctioned would allow for "faster broadband to autonomous cars, from smart [agriculture] to telehealth." The spectrum being auctioned over the next 15 months "is more spectrum than is currently used for terrestrial mobile broadband by all wireless service providers combined," the FCC said. Democratic FCC Commissioner Jessica Rosenworcel said the United States was following "the lead of South Korea, the United Kingdom, Spain, Italy, Ireland and Australia. But we put ourselves back in the running for next-generation wireless leadership," and she called on the FCC to clearly state the timing for future spectrum auctions. Last month, U.S. President Donald Trump signed a presidential memorandum directing the Commerce Department to develop a long-term comprehensive national spectrum strategy to prepare for the introduction of 5G. 
Trump is also creating a White House Spectrum Strategy Task Force and wants federal agencies to report on government spectrum needs and review how spectrum can be shared with private sector users. AT&T, Verizon Communications, Sprint and T-Mobile U.S. are working to acquire spectrum and are developing and testing 5G networks. The first 5G-compatible commercial cellphones are expected to go on sale next year.
The online sale of sex slaves is going strong despite new U.S. laws to clamp down on the crime, data analysts said Wednesday, urging a wider use of technology to fight human trafficking. In April, the United States passed legislation aimed at making it easier to prosecute social media platforms and websites that facilitate sex trafficking, days after a crackdown on classified ad giant Backpage.com. The law resulted in an immediate and sharp drop in sex ads online, but numbers have since picked up again, data presented at the Thomson Reuters Foundation's annual Trust Conference showed. "The market has been destabilized and there are now new entrants that are willing to take the risk in order to make money," Chris White, a researcher at tech giant Microsoft who gathered the data, told the event in London.

New players

Backpage.com, a massive advertising site primarily used to sell sex — which some analysts believe accounted for 80 percent of online sex trafficking in the United States — was shut down by federal authorities in April. Days later, the Fight Online Sex Trafficking Act (FOSTA), which introduced stiff prison sentences and fines for website owners and operators found guilty of contributing to sex trafficking, was passed into law. The combined action caused the number of online sex ads to fall 80 percent, to about 20,000 a day nationwide, White said. The number of ads has since risen to about 60,000 a day as new websites filled the gap, he said. In October — in response to a lawsuit accusing it of not doing enough to protect users from human traffickers — social media giant Facebook said it worked internally and externally to thwart such predators. Using technology to continuously monitor and analyze this kind of data is key to evaluating existing laws and designing new and more effective ones, White said. "It really highlights what's possible through policy," added Valiant Richey, a former U.S. prosecutor who now fights human trafficking at the Organization for Security and Co-operation in Europe (OSCE), echoing the calls for new methods. Law enforcement agencies currently tackle slavery one case at a time, but the approach falls short because the crime is too widespread and authorities are short of resources, he said. As a prosecutor in Seattle, Richey said his office would work on up to 80 cases a year, while online searches revealed more than 100 websites where sex was sold in the area, some carrying an average of 35,000 ads every month. "We were fighting forest fire with a garden hose," he said. "A case-based response to human trafficking will not on its own carry the day." At least 40 million people are victims of modern slavery worldwide — with nearly 25 million trapped in forced labor and about 15 million in forced marriages.
Bitcoin fell to a more than one-year low on Wednesday, breaching a key support level of $6,000 and causing a wave of selling in the digital currency and other crypto assets in what has been a prolonged market slump that began early this year. Bitcoin fell to as low as $5,533.09 on the Bitstamp platform. It was down 9 percent at $5,690.47. "For the last few days you could see the consolidation happening and the price was moving on the downside," said Naeem Aslam, analyst at ThinkMarkets, a multi-asset online brokerage. "The break of $6,200 yesterday gave a fair indication that there are no buyers on the sidelines at this point," he added. Bitcoin's weakness spread to other cryptocurrencies, with ethereum, the second-largest, dropping to a two-month low. Ethereum was last down 10 percent at $182.41. Wednesday's sell-off in cryptocurrencies pushed the sector's market capitalization to under $200 billion for the first time since around mid-September, according to data from industry data tracker coinmarketcap.com. "What you are seeing... is a breakout on the downside. Sometimes when things happen, it takes a while for the true reason to become clear - an exchange trade or regulatory action," said Charlie Hayter, founder of industry website Cryptocompare in London. Other market participants suggested that Thursday's impending "hard fork," or split of bitcoin cash - another cryptocurrency that emerged out of bitcoin - into two separate currencies, has caused some volatility as well. Twice a year, bitcoin cash undergoes scheduled protocol upgrades, which include splitting its network. "For our trading activities, the hard fork recently has generated tremendous interest and trading volume, above 4 billion daily, among traders," said Ricky Li, co-founder of crypto trading and advisory firm Altonomy. 
Overall, analysts said the outlook for bitcoin remains unclear, with longer-term forecasts dependent on the virtual currency becoming a reliable store of value or a viable payment mechanism. However, there are growing signs of greater institutional participation in bitcoin, such as increased demand for a bitcoin exchange traded fund and rising bitcoin futures volume, analysts said. But they noted that actual participation remains low among both institutional and retail investors.
Robots with rigid metal frames are being used to help the paralyzed walk and have applications that could one day grant military fighters extra power on the battlefield. The problem is that they're uncomfortable and heavy. But researchers at Harvard University are working on lighter, flexible devices that move easily and don't weigh much. VOA's Kevin Enochs reports.
Nigeria's Main One Cable took responsibility Tuesday for a glitch that temporarily caused some Google global traffic to be misrouted through China, saying it accidentally caused the problem during a network upgrade. The issue surfaced Monday afternoon as internet monitoring firms ThousandEyes and BGPmon said some traffic to Alphabet's Google had been routed through China and Russia, raising concerns that the communications had been intentionally hijacked. Main One said in an email that it had caused a 74-minute glitch by misconfiguring a border gateway protocol filter used to route traffic across the internet. That resulted in some Google traffic being sent through Main One partner China Telecom, the West African firm said. Google has said little about the matter. It acknowledged the problem Monday in a post on its website that said it was investigating the glitch and that it believed the problem originated outside the company. The company did not say how many users were affected or identify specific customers. Google representatives could not be reached Tuesday to comment on Main One's statement.

Hacking concerns

Even though Main One said it was to blame, some security experts said the incident highlighted concerns about the potential for hackers to conduct espionage or disrupt communications by exploiting known vulnerabilities in the way traffic is routed over the internet. The U.S.-China Economic and Security Review Commission, a Washington group that advises the U.S. Congress on security issues, plans to investigate the issue, said Commissioner Michael Wessel. "We will work to gain more facts about what has happened recently and look at what legal tools or legislation or law enforcement activities can help address this problem," Wessel said. Glitches in border gateway protocol filters have caused multiple outages to date, including cases in which traffic from U.S. internet and financial services firms was routed through Russia, China and Belarus.
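The routing behavior behind such incidents can be illustrated with a minimal longest-prefix-match sketch using Python's standard ipaddress module. The prefixes and route names below are purely illustrative, not the actual announcements involved in the Main One incident:

```python
import ipaddress

# BGP routers forward traffic toward the most specific (longest) matching
# prefix they know. A misconfigured or leaked announcement of a *more*
# specific prefix therefore pulls traffic away from the legitimate route.

# Hypothetical routing table: one legitimate /24 announcement.
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "legitimate origin",
}

def next_hop(routes, addr):
    """Return the route whose prefix most specifically covers addr."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    # Longest prefix (largest prefixlen) wins.
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(next_hop(routes, "203.0.113.8"))  # → legitimate origin

# A leaked, more specific /25 announcement now overrides the /24
# for half of the address space:
routes[ipaddress.ip_network("203.0.113.0/25")] = "leaked route via third party"
print(next_hop(routes, "203.0.113.8"))    # → leaked route via third party
print(next_hop(routes, "203.0.113.200"))  # → legitimate origin (outside the /25)
```

This is why a single misconfigured border gateway protocol filter can silently redirect traffic: neighboring networks simply follow the most specific prefix they hear, with no built-in check on who is entitled to announce it.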
Yuval Shavitt, a network security researcher at Tel Aviv University, said it was possible that Monday's issue was not an accident. "You can always claim that this is some kind of configuration error," said Shavitt, who last month co-authored a paper alleging that the Chinese government had conducted a series of internet hijacks. Main One, which describes itself as a leading provider of telecom and network services for businesses in West Africa, said that it had investigated the matter and implemented new processes to prevent it from happening again.
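The incident described above is a classic BGP route leak: a misconfigured filter re-announced routes it should have kept local, and monitors such as BGPmon noticed traffic following anomalous paths. As a rough illustration of the kind of origin check such monitors perform, here is a minimal sketch in Python. It is not Main One's or BGPmon's actual tooling; the prefix, ASNs and announcements are invented for the example (using documentation address space and reserved ASNs) and do not correspond to the real incident.

```python
# Minimal origin-validation sketch: flag BGP announcements whose origin AS
# differs from the expected one. All values below are illustrative only.

# Expected origin ASN per prefix (hypothetical; real monitors build this
# from historical routing data or registry records).
EXPECTED_ORIGIN = {
    "203.0.113.0/24": 64496,
}

def find_leaks(announcements):
    """Return announcements whose origin AS is not the expected one.

    Each announcement is (prefix, as_path). By convention the origin AS
    is the last ASN on the path; a mismatch against the expected origin
    suggests a leak or hijack worth investigating.
    """
    leaks = []
    for prefix, as_path in announcements:
        expected = EXPECTED_ORIGIN.get(prefix)
        if expected is not None and as_path[-1] != expected:
            leaks.append((prefix, as_path))
    return leaks

observed = [
    ("203.0.113.0/24", [64500, 64496]),  # normal path, expected origin
    ("203.0.113.0/24", [64501, 64502]),  # origin mismatch: flagged
]
print(find_leaks(observed))  # → [('203.0.113.0/24', [64501, 64502])]
```

A real monitor would of course also check for suspicious intermediate ASNs on the path (the Google leak kept its true origin but transited China Telecom), but origin validation is the simplest first-line check.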
NATO is developing new high-tech tools, such as the ability to 3-D-print parts for weapons and deliver them by drone, as it scrambles to retain a competitive edge over Russia, China and other would-be battlefield adversaries. Gen. Andre Lanata, who took over as head of the NATO transformation command in September, told a conference in Berlin that his command demonstrated over 21 "disruptive" projects during military exercises in Norway this month. He urged startups as well as traditional arms manufacturers to work with the Atlantic alliance to boost innovation, as rapid and easy access to emerging technologies was helping adversaries narrow NATO's long-standing advantage. Lanata's command hosted its third "innovation challenge" in tandem with the conference this week, where 10 startups and smaller firms presented ideas for defeating swarms of drones on the ground and in the air.

Winner from Belgium

Belgian firm ALX Systems, which builds civilian surveillance drones, won this year's challenge. Its CEO, Geoffrey Mormal, said small companies like his often struggled with cumbersome weapons procurement processes. "It's a very hot topic, so perhaps it will help to enable quicker decisions," he told Reuters. Lanata said NATO was focused on areas such as artificial intelligence, connectivity, quantum computing, big data and hypervelocity, but also wants to learn from DHL and others how to improve the logistics of moving weapons and troops. NATO Secretary-General Jens Stoltenberg said increasing military spending by NATO members would help tackle some of the challenges, but efforts were also needed to reduce widespread duplication and fragmentation in the European defense sector. Participants also met behind closed doors with chief executives from 12 of the 15 biggest arms makers in Europe.
Facebook said Tuesday it had been unable to determine who was behind dozens of fake accounts it took down shortly before the 2018 U.S. midterm elections. “Combined with our takedown last Monday, in total we have removed 36 Facebook accounts, 6 Pages, and 99 Instagram accounts for coordinated inauthentic behavior,” Nathaniel Gleicher, head of cybersecurity policy, wrote on the company’s blog. At least one of the Instagram accounts had well over a million followers, according to Facebook. A website claiming to represent the Russian state-sponsored Internet Research Agency claimed responsibility for the accounts last week, but Facebook said it did not have enough information to link the accounts to the agency, which has been called a troll farm. “As multiple independent experts have pointed out, trolls have an incentive to claim that their activities are more widespread and influential than may be the case,” Gleicher wrote. Sample images provided by Facebook showed posts on a wide range of issues. Some advocated on behalf of social issues such as women’s rights and LGBT pride, while others appeared to be conservative users voicing support for President Donald Trump. The viewpoints on display potentially fall in line with a Russian tactic identified in other cases of falsified accounts. A recent analysis of millions of tweets by the Atlantic Council found that Russian trolls often pose as members on either side of contentious issues in order to maximize division in the United States.
If you're really lucky and live in the U.S. cities of Houston, Indianapolis, Los Angeles or Sacramento, you now have access to a 5G network. If you live anywhere else, just be patient... a 5G mobile network is coming your way, and it's already arriving in some countries. VOA's Kevin Enochs reports.
German states have drafted a list of demands aimed at tightening a law that requires social media companies like Facebook and Twitter to remove hate speech from their sites, the Handelsblatt newspaper reported Monday. Justice ministers from the states will submit their proposed revisions to the German law, called NetzDG, at a meeting with Justice Minister Katarina Barley on Thursday, the newspaper said, adding that it had obtained a draft of the document. The law, which came into full force on Jan. 1, is a highly ambitious effort to control what appears on social media, and it has drawn a range of criticism. While the German states are focused on concerns about how complaints are processed, other officials have called for changes following criticism that too much content was being blocked. The states' justice ministers are calling for changes that would make it easier for people who want to complain about banned content, such as pro-Nazi ideology, to find the required forms on social media platforms. They also want to fine social media companies up to 500,000 euros ($560,950) for providing "meaningless replies" to queries from law enforcement authorities, the newspaper said. Till Steffen, the top justice official in Hamburg and a member of the Greens party, told the newspaper that the law had in some cases proven to be "a paper tiger." "If we want to effectively limit hate and incitement on the internet, we have to give the law more bite and close the loopholes," he told the paper. "For instance, it cannot be the case that some platforms hide their complaint forms so that no one can find them." Facebook in July said it had deleted hundreds of offensive posts since implementation of the law, which foresees fines of up to 50 million euros ($56.1 million) for failure to comply.
Facebook will allow French regulators to "embed" inside the company to examine how it combats online hate speech, the first time the wary tech giant has opened its doors in such a way, President Emmanuel Macron said Monday. From January, Macron's administration will send a small team of senior civil servants to the company for six months to verify Facebook's goodwill and determine whether its checks on racist, sexist or hate-fueled speech could be improved. "It's a first," Macron told the annual Internet Governance Forum in Paris. "I'm delighted by this very innovative experimental approach," he said. "It's an experiment, but a very important first step in my view." The trial project is an example of what Macron has called "smart regulation," something he wants to extend to other tech leaders such as Google, Apple and Amazon. The move follows a meeting with Facebook's founder Mark Zuckerberg in May, when Macron invited the CEOs of some of the biggest tech firms to Paris, telling them they should work for the common good. The officials may be seconded from the telecoms regulator and the interior and justice ministries, a government source said. Facebook said the selection was up to the French presidency. It is unclear whether the group will have access to highly sensitive material such as Facebook's algorithms or the code it uses to remove hate speech. It could travel to Facebook's European headquarters in Dublin and global base in Menlo Park, California, if necessary, the company said. "The best way to ensure that any regulation is smart and works for people is by governments, regulators and businesses working together to learn from each other and explore ideas," Nick Clegg, the former British deputy prime minister who is now head of Facebook's global affairs, said in a statement. France's approach to hate speech has contrasted sharply with Germany, Europe's leading advocate of privacy. 
Since January, Berlin has required sites to remove banned content within 24 hours or face fines of up to 50 million euros ($56 million). That has led to accusations of censorship. France's use of embedded regulators is modeled on what happens in its banking and nuclear industries. "[Tech companies] now have the choice between something that is smart but intrusive and regulation that is wicked and plain stupid," a French official said.
France and U.S. technology giants including Microsoft on Monday urged world governments and companies to sign up to a new initiative to regulate the internet and fight threats such as cyberattacks, online censorship and hate speech. With the launch of a declaration entitled the "Paris Call for Trust and Security in Cyberspace," French President Emmanuel Macron is hoping to revive efforts to regulate cyberspace after the last round of United Nations negotiations failed in 2017. In the document, which is supported by many European countries but, crucially, not China or Russia, the signatories urge governments to beef up protections against cyber meddling in elections and prevent the theft of trade secrets. The Paris Call was initially pushed for by tech companies but was redrafted by French officials to include work done by U.N. experts in recent years. "The internet is a space currently managed by a technical community of private players. But it's not governed. So now that half of humanity is online, we need to find new ways to organize the internet," an official from Macron's office said. "Otherwise, the internet as we know it today, free, open and secure, will be damaged by the new threats." By launching the initiative a day after a weekend of commemorations marking the 100th anniversary of the end of World War I, Macron hopes to promote his push for stronger global cooperation in the face of rising nationalism. In another sign of the Trump administration's reluctance to join international initiatives it sees as a bid to encroach on U.S. sovereignty, French officials said Washington might not become a signatory, though talks are continuing. However, they said large U.S. tech companies including Facebook and Alphabet's Google would sign up. "The American ecosystem is very involved. It doesn't mean that in the end the U.S. federal government won't join us, talks are continuing, but the U.S. will be involved under other forms," another French official said.