• Can Humanity Survive AI?
    https://jacobin.com/2024/01/can-humanity-survive-ai

    The question reflects the interests of one segment of the inhabitants of the California bubble. It is nonetheless interesting, because it revolves around the dangerous ideas of a few very rich politicians and entrepreneurs. Buckle your seatbelts before boarding the roller coaster that is this article.

    22.1.2024 by Garrison Lovely - With the development of artificial intelligence racing forward at warp speed, some of the richest men in the world may be deciding the fate of humanity right now.

    Google cofounder Larry Page thinks superintelligent AI is “just the next step in evolution.” In fact, Page, who’s worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are “speciesist” and “sentimental nonsense.”

    In July, former Google DeepMind senior scientist Richard Sutton — one of the pioneers of reinforcement learning, a major subfield of AI — said that the technology “could displace us from existence,” and that “we should not resist succession.” In a 2015 talk, Sutton said, suppose “everything fails” and AI “kill[s] us all”; he asked, “Is it so bad that humans are not the final form of intelligent life in the universe?”

    “Biological extinction, that’s not the point,” Sutton, sixty-six, told me. “The light of humanity and our understanding, our intelligence — our consciousness, if you will — can go on without meat humans.”

    Yoshua Bengio, fifty-nine, is the second-most cited living scientist, noted for his foundational work on deep learning. Responding to Page and Sutton, Bengio told me, “What they want, I think it’s playing dice with humanity’s future. I personally think this should be criminalized.” A bit surprised, I asked what exactly he wanted outlawed, and he said efforts to build “AI systems that could overpower us and have their own self-interest by design.” In May, Bengio began writing and speaking about how advanced AI systems might go rogue and pose an extinction risk to humanity.

    Bengio posits that future, genuinely human-level AI systems could improve their own capabilities, functionally creating a new, more intelligent species. Humanity has driven hundreds of other species extinct, largely by accident. He fears that we could be next — and he isn’t alone.

    Bengio shared the 2018 Turing Award, computing’s Nobel Prize, with fellow deep learning pioneers Yann LeCun and Geoffrey Hinton. Hinton, the most cited living scientist, made waves in May when he resigned from his senior role at Google to more freely sound off about the possibility that future AI systems could wipe out humanity. Hinton and Bengio are the two most prominent AI researchers to join the “x-risk” community. Sometimes referred to as AI safety advocates or doomers, this loose-knit group worries that AI poses an existential risk to humanity.

    In the same month that Hinton resigned from Google, hundreds of AI researchers and notable figures signed an open letter stating, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Hinton and Bengio were the lead signatories, followed by OpenAI CEO Sam Altman and the heads of other top AI labs.

    Hinton and Bengio were also the first authors of an October position paper warning about the risk of “an irreversible loss of human control over autonomous AI systems,” joined by famous academics like Nobel laureate Daniel Kahneman and Sapiens author Yuval Noah Harari.

    LeCun, who runs AI at Meta, agrees that human-level AI is coming but said in a public debate against Bengio on AI extinction, “If it’s dangerous, we won’t build it.”

    Deep learning powers the most advanced AI systems in the world, from DeepMind’s protein-folding model to large language models (LLMs) like OpenAI’s ChatGPT. No one really understands how deep learning systems work, but their performance has continued to improve nonetheless. These systems aren’t designed to function according to a set of well-understood principles but are instead “trained” to analyze patterns in large datasets, with complex behavior — like language understanding — emerging as a consequence. AI developer Connor Leahy told me, “It’s more like we’re poking something in a Petri dish” than writing a piece of code. The October position paper warns that “no one currently knows how to reliably align AI behavior with complex values.”
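
    To make the “trained, not written” distinction concrete, here is a minimal, purely illustrative sketch of a training loop in PyTorch (the toy model, data, and hyperparameters are invented for this example and are not from the article): the developer specifies only a loss function and an update rule, and the system’s behavior emerges from repeatedly nudging millions of numeric weights.

        import torch
        import torch.nn as nn

        # A tiny network standing in for the billion-parameter models discussed above.
        model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.MSELoss()

        # Toy random data standing in for the huge datasets real systems ingest.
        inputs = torch.randn(256, 10)
        targets = torch.randn(256, 1)

        for step in range(100):
            predictions = model(inputs)           # forward pass
            loss = loss_fn(predictions, targets)  # how wrong is the model right now?
            optimizer.zero_grad()
            loss.backward()                       # compute gradients of the loss
            optimizer.step()                      # nudge the weights a little

    Nothing in this loop specifies what the model should “know”; whatever behavior it ends up with is a byproduct of fitting the data, which is why researchers describe the process as closer to growing something in a dish than engineering it.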

    In spite of all this uncertainty, AI companies see themselves as being in a race to make these systems as powerful as they can — without a workable plan to understand how the things they’re creating actually function, all while cutting corners on safety to win more market share. Artificial general intelligence (AGI) is the holy grail that leading AI labs are explicitly working toward. AGI is often defined as a system that is at least as good as humans at almost any intellectual task. It’s also the thing that Bengio and Hinton believe could lead to the end of humanity.

    Bizarrely, many of the people actively advancing AI capabilities think there’s a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to “human extinction or [a] similarly permanent and severe disempowerment” of humanity. Just months before he cofounded OpenAI, Altman said, “AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.”

    Public opinion on AI has soured, particularly in the year since ChatGPT was released. In all but one 2023 survey, more Americans than not have thought that AI could pose an existential threat to humanity. In the rare instances when pollsters asked people if they wanted human-level or beyond AI, strong majorities in the United States and the UK said they didn’t.

    So far, when socialists weigh in on AI, it’s usually to highlight AI-powered discrimination or to warn about the potentially negative impact of automation in a world of weak unions and powerful capitalists. But the Left has been conspicuously quiet about Hinton and Bengio’s nightmare scenario — that advanced AI could kill us all.
    Worrying Capabilities
    Illustration by Ricardo Santos

    While much of the attention from the x-risk community focuses on the idea that humanity could eventually lose control of AI, many are also worried about less capable systems empowering bad actors on very short timelines.

    Thankfully, it’s hard to make a bioweapon. But that might change soon.

    Anthropic, a leading AI lab founded by safety-forward ex-OpenAI staff, recently worked with biosecurity experts to see how much an LLM could help an aspiring bioterrorist. Testifying before a Senate subcommittee in July, Anthropic CEO Dario Amodei reported that certain steps in bioweapons production can’t be found in textbooks or search engines, but that “today’s AI tools can fill in some of these steps, albeit incompletely,” and that “a straightforward extrapolation of today’s systems to those we expect to see in two to three years suggests a substantial risk that AI systems will be able to fill in all the missing pieces.”

    In October, New Scientist reported that Ukraine made the first battlefield use of lethal autonomous weapons (LAWs) — literally killer robots. The United States, China, and Israel are developing their own LAWs. Russia has joined the United States and Israel in opposing new international law on LAWs.

    However, the more expansive idea that AI poses an existential risk has many critics, and the roiling AI discourse is hard to parse: equally credentialed people make opposite claims about whether AI x-risk is real, and venture capitalists are signing open letters with progressive AI ethicists. And while the x-risk idea seems to be gaining ground the fastest, a major publication runs an essay seemingly every week arguing that x-risk distracts from existing harms. Meanwhile, orders of magnitude more money and people are quietly dedicated to making AI systems more powerful than to making them safer or less biased.

    Some fear not the “sci-fi” scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrust biased, brittle, and confabulating systems with too much responsibility, opening a more pedestrian Pandora’s box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates — often labeled “AI ethics” — tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.

    I spoke with some of the most prominent voices from the AI ethics community, like computer scientists Joy Buolamwini, thirty-three, and Inioluwa Deborah Raji, twenty-seven. Each has conducted pathbreaking research into existing harms caused by discriminatory and flawed AI models whose impacts, in their view, are obscured one day and overhyped the next. Like that of many AI ethics researchers, their work blends science and activism.

    Those I spoke to within the AI ethics world largely expressed a view that, rather than facing fundamentally new challenges like the prospect of complete technological unemployment or extinction, the future of AI looks more like intensified racial discrimination in incarceration and loan decisions, the Amazon warehouse-ification of workplaces, attacks on the working poor, and a further entrenched and enriched techno-elite.
    Illustration by Ricardo Santos

    A frequent argument from this crowd is that the extinction narrative overhypes the capabilities of Big Tech’s products and dangerously “distracts” from AI’s immediate harms. At best, they say, entertaining the x-risk idea is a waste of time and money. At worst, it leads to disastrous policy ideas.

    But many of the x-risk believers highlighted that the positions “AI causes harm now” and “AI could end the world” are not mutually exclusive. Some researchers have tried explicitly to bridge the divide between those focused on existing harms and those focused on extinction, highlighting potential shared policy goals. AI professor Sam Bowman, another person whose name is on the extinction letter, has done research to reveal and reduce algorithmic bias and reviews submissions to the main AI ethics conference. Simultaneously, Bowman has called for more researchers to work on AI safety and wrote of the “dangers of underclaiming” the abilities of LLMs.

    The x-risk community commonly invokes climate advocacy as an analogy, asking whether the focus on reducing the long-term harms of climate change dangerously distracts from the near-term harms from air pollution and oil spills.

    But by their own admission, not everyone from the x-risk side has been as diplomatic. In an August 2022 thread of spicy AI policy takes, Anthropic cofounder Jack Clark tweeted that “Some people who work on long-term/AGI-style policy tend to ignore, minimize, or just not consider the immediate problems of AI deployment/harms.”
    “AI Will Save the World”

    A third camp worries that when it comes to AI, we’re not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessen agree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far more that regulatory overreaction to AI will smother a transformative, world-saving technology in its crib, dooming humanity to economic stagnation.

    Some techno-optimists envision an AI-powered utopia that makes Karl Marx seem unimaginative. The Guardian recently released a mini-documentary featuring interviews from 2016 through 2019 with OpenAI’s chief scientist, Ilya Sutskever, who boldly pronounces: “AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty. But it will also create new problems.”

    Andreessen is with Sutskever — right up until the “but.” In June, Andreessen published an essay called “Why AI Will Save the World,” where he explains how AI will make “everything we care about better,” as long as we don’t regulate it to death. He followed it up in October with his “Techno-Optimist Manifesto,” which, in addition to praising a founder of Italian fascism, named as enemies of progress ideas like “existential risk,” “sustainability,” “trust and safety,” and “tech ethics.” Andreessen does not mince words, writing, “We believe any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing [are] a form of murder.”

    Andreessen, along with “pharma bro” Martin Shkreli, is perhaps the most famous proponent of “effective accelerationism,” also called “e/acc,” a mostly online network that mixes cultish scientism, hypercapitalism, and the naturalistic fallacy. E/acc, which went viral this summer, builds on reactionary writer Nick Land’s theory of accelerationism, which argues that we need to intensify capitalism to propel ourselves into a posthuman, AI-powered future. E/acc takes this idea and adds a layer of physics and memes, mainstreaming it for a certain subset of Silicon Valley elites. It was formed in reaction to calls from “decels” to slow down AI, which have come significantly from the effective altruism (EA) community, from which e/acc takes its name.

    AI booster Richard Sutton — the scientist ready to say his goodbyes to “meat humans” — is now working at Keen AGI, a new start-up from John Carmack, the legendary programmer behind the 1990s video game Doom. The company mission, according to Carmack: “AGI or bust, by way of Mad Science!”
    Capitalism Makes It Worse

    In February, Sam Altman tweeted that Eliezer Yudkowsky might eventually “deserve the Nobel Peace Prize.” Why? Because Altman thought the autodidactic researcher and Harry Potter fan-fiction author had done “more to accelerate AGI than anyone else.” Altman cited how Yudkowsky helped DeepMind secure pivotal early-stage funding from Peter Thiel as well as Yudkowsky’s “critical” role “in the decision to start OpenAI.”

    Yudkowsky was an accelerationist before the term was even coined. At the age of seventeen — fed up with dictatorships, world hunger, and even death itself — he published a manifesto demanding the creation of a digital superintelligence to “solve” all of humanity’s problems. Over the next decade of his life, his “technophilia” turned to phobia, and in 2008 he wrote about his conversion story, admitting that “to say, I almost destroyed the world!, would have been too prideful.”

    Yudkowsky is now famous for popularizing the idea that AGI could kill everyone, and he has become the doomiest of the AI doomers. A generation of techies grew up reading Yudkowsky’s blog posts, but many of them (perhaps most consequentially, Altman) internalized his argument that AGI would be the most important thing ever far more than they did his beliefs about how hard it would be to get it not to kill us.

    During our conversation, Yudkowsky compared AI to a machine that “prints gold,” right up until it “ignite[s] the atmosphere.”

    And whether or not it will ignite the atmosphere, that machine is printing gold faster than ever. The “generative AI” boom is making some people very, very rich. Since 2019, Microsoft has invested a cumulative $13 billion into OpenAI. Buoyed by the wild success of ChatGPT, Microsoft gained nearly $1 trillion in value in the year following the product’s release. Today the nearly fifty-year-old corporation is worth more than Google and Meta combined.

    Profit-maximizing actors will continue barreling forward, externalizing risks the rest of us never agreed to bear, in the pursuit of riches — or simply the glory of creating digital superintelligence, which Sutton tweeted “will be the greatest intellectual achievement of all time … whose significance is beyond humanity, beyond life, beyond good and bad.” Market pressures will likely push companies to transfer more and more power and autonomy to AI systems as they improve.

    One Google AI researcher wrote to me, “I think big corps are in such a rush to win market share that [AI] safety is seen as a kind of silly distraction.” Bengio told me he sees “a dangerous race between companies” that could get even worse.

    Panicking in response to the OpenAI-powered Bing search engine, Google declared a “code red,” “recalibrate[d]” their risk appetite, and rushed to release Bard, their LLM, over staff opposition. In internal discussions, employees called Bard “a pathological liar” and “cringe-worthy.” Google published it anyway.

    Dan Hendrycks, the director of the Center for AI Safety, said that “cutting corners on safety . . . is largely what AI development is driven by. . . . I don’t think, actually, in the presence of these intense competitive pressures, that intentions particularly matter.” Ironically, Hendrycks is also the safety adviser to xAI, Elon Musk’s latest venture.

    The three leading AI labs all began as independent, mission-driven organizations, but they are now either full subsidiaries of tech behemoths (Google DeepMind) or have taken on so many billions of dollars in investment from trillion-dollar companies that their altruistic missions may get subsumed by the endless quest for shareholder value (Anthropic has taken up to $6 billion from Google and Amazon combined, and Microsoft’s $13 billion bought them 49 percent of OpenAI’s for-profit arm). The New York Times recently reported that DeepMind’s founders became “increasingly worried about what Google would do with their inventions. In 2017, they tried to break away from the company. Google responded by increasing the salaries and stock award packages of the DeepMind founders and their staff. They stayed put.”

    One developer at a leading lab wrote to me in October that, since the leadership of these labs typically truly believes AI will obviate the need for money, profit-seeking is “largely instrumental” for fundraising purposes. But “then the investors (whether it’s a VC firm or Microsoft) exert pressure for profit-seeking.”

    Between 2020 and 2022, more than $600 billion in corporate investment flowed into the industry, and a single 2021 AI conference hosted nearly thirty thousand researchers. At the same time, a September 2022 estimate found only four hundred full-time AI safety researchers, and the primary AI ethics conference had fewer than nine hundred attendees in 2023.

    Just as software “ate the world,” we should expect AI to exhibit a similar winner-takes-all dynamic, leading to even greater concentrations of wealth and power. Altman has predicted that the “cost of intelligence” will drop to near zero as a result of AI, and in 2021 he wrote that “even more power will shift from labor to capital.” He continued, “If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.” Also in his “spicy take” thread, Jack Clark wrote, “economy-of-scale capitalism is, by nature, anti-democratic, and capex-intensive AI is therefore anti-democratic.”

    Markus Anderljung is the policy chief at GovAI, a leading AI safety think tank, and the first author on an influential white paper focused on regulating “frontier AI.” He wrote to me and said, “If you’re worried about capitalism in its current form, you should be even more worried about a world where huge parts of the economy are run by AI systems explicitly trained to maximize profit.”

    Sam Altman, circa June 2021, agreed, telling Ezra Klein about the founding philosophy of OpenAI: “One of the incentives that we were very nervous about was the incentive for unlimited profit, where more is always better. . . . And I think with these very powerful general purpose AI systems, in particular, you do not want an incentive to maximize profit indefinitely.”

    In a stunning move that has come to be widely seen as the biggest flash point in the AI safety debate so far, OpenAI’s nonprofit board fired CEO Sam Altman on November 17, 2023, the Friday before Thanksgiving. The board, per OpenAI’s unusual charter, has a fiduciary duty to “humanity,” rather than to investors or employees. As justification, the board vaguely cited Altman’s lack of candor but then, ironically, largely kept quiet about its decision.

    Around 3 a.m. the following Monday, Microsoft announced that Altman would be spinning up an advanced research lab with positions for every OpenAI employee, the vast majority of whom signed a letter threatening to take Microsoft’s offer if Altman wasn’t reinstated. (While he appears to be a popular CEO, it’s worth noting that the firing disrupted a planned sale of OpenAI’s employee-owned stock at a company valuation of $86 billion.) Just after 1 a.m. on Wednesday, OpenAI announced Altman’s return as CEO and two new board members: the former Twitter board chair, and former Treasury secretary Larry Summers.

    Within less than a week, OpenAI executives and Altman had collaborated with Microsoft and the company’s staff to engineer his successful return and the removal of most of the board members behind his firing. Microsoft’s first preference was having Altman back as CEO. The unexpected ouster initially sent the legacy tech giant’s stock plunging 5 percent ($140 billion), and the announcement of Altman’s reinstatement took it to an all-time high. Loath to be “blindsided” again, Microsoft is now taking a nonvoting seat on the nonprofit board.

    Immediately after Altman’s firing, X exploded, and a narrative largely fueled by online rumors and anonymously sourced articles emerged that safety-focused effective altruists on the board had fired Altman over his aggressive commercialization of OpenAI’s models at the expense of safety. Capturing the tenor of the overwhelming e/acc response, then-pseudonymous founder @BasedBeffJezos posted, “EAs are basically terrorists. Destroying 80B of value overnight is an act of terrorism.”

    The picture that emerged from subsequent reporting was that a fundamental mistrust of Altman, not immediate concerns about AI safety, drove the board’s choice. The Wall Street Journal found that “there wasn’t one incident that led to their decision to eject Altman, but a consistent, slow erosion of trust over time that made them increasingly uneasy.”

    Weeks before the firing, Altman reportedly used dishonest tactics to try to remove board member Helen Toner over an academic paper she coauthored that he felt was critical of OpenAI’s commitment to AI safety. In the paper, Toner, an EA-aligned AI governance researcher, lauded Anthropic for avoiding “the kind of frantic corner-cutting that the release of ChatGPT appeared to spur.”

    The New Yorker reported that “some of the board’s six members found Altman manipulative and conniving.” Days after the firing, a DeepMind AI safety researcher who used to work for OpenAI wrote that Altman “lied to me on various occasions” and “was deceptive, manipulative, and worse to others,” an assessment echoed by recent reporting in Time.

    This wasn’t Altman’s first time being fired. In 2019, Y Combinator founder Paul Graham removed Altman from the incubator’s helm over concerns that he was prioritizing his own interests over those of the organization. Graham has previously said, “Sam is extremely good at becoming powerful.”

    OpenAI’s strange governance model was established specifically to prevent the corrupting influence of profit-seeking, but as the Atlantic rightly proclaimed, “the money always wins.” And more money than ever is going into advancing AI capabilities.
    Full Speed Ahead

    Recent AI progress has been driven by the culmination of many decades-long trends: increases in the amount of computing power (referred to as “compute”) and data used to train AI models, which themselves have been amplified by significant improvements in algorithmic efficiency. Since 2010, the amount of compute used to train AI models has increased roughly one-hundred-millionfold. Most of the advances we’re seeing now are the product of what was at the time a much smaller and poorer field.
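
    As a rough back-of-the-envelope illustration of what a one-hundred-millionfold increase implies (the thirteen-year window is an assumption for the calculation, not a figure from the article), that growth corresponds to training compute doubling roughly every six months:

        import math

        growth_factor = 1e8   # ~100,000,000x growth in training compute
        years = 13            # assumed window, roughly 2010 to 2023

        doublings = math.log2(growth_factor)          # about 26.6 doublings
        months_per_doubling = years * 12 / doublings  # about 5.9 months
        print(f"{doublings:.1f} doublings, one roughly every {months_per_doubling:.1f} months")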

    And while the last year has certainly contained more than its fair share of AI hype, the confluence of these three trends has led to quantifiable results. The time it takes AI systems to achieve human-level performance on many benchmark tasks has shortened dramatically in the last decade.

    It’s possible, of course, that AI capability gains will hit a wall. Researchers may run out of good data to use. Moore’s law — the observation that the number of transistors on a microchip doubles every two years — will eventually become history. Political events could disrupt manufacturing and supply chains, driving up compute costs. And scaling up systems may no longer lead to better performance.

    But the reality is that no one knows the true limits of existing approaches. A clip of a January 2022 Yann LeCun interview resurfaced on Twitter this year. LeCun said, “I don’t think we can train a machine to be intelligent purely from text, because I think the amount of information about the world that’s contained in text is tiny compared to what we need to know.” To illustrate his point, he gave an example: “I take an object, I put it on the table, and I push the table. It’s completely obvious to you that the object would be pushed with the table.” However, with “a text-based model, if you train a machine, as powerful as it could be, your ‘GPT-5000’ . . . it’s never gonna learn about this.”

    But if you give ChatGPT-3.5 that example, it instantly spits out the correct answer.

    In an interview published four days before his firing, Altman said, “Until we go train that model [GPT-5], it’s like a fun guessing game for us. We’re trying to get better at it, because I think it’s important from a safety perspective to predict the capabilities. But I can’t tell you here’s exactly what it’s going to do that GPT-4 didn’t.”

    History is littered with bad predictions about the pace of innovation. A New York Times editorial claimed it might take “one million to ten million years” to develop a flying machine — sixty-nine days before the Wright Brothers first flew. In 1933, Ernest Rutherford, the “father of nuclear physics,” confidently dismissed the possibility of a neutron-induced chain reaction, inspiring physicist Leo Szilard to hypothesize a working solution the very next day — a solution that ended up being foundational to the creation of the atomic bomb.

    One conclusion that seems hard to avoid is that, recently, the people who are best at building AI systems believe AGI is both possible and imminent. OpenAI and DeepMind, perhaps the two leading AI labs, have been working toward AGI since their inception, starting at a time when admitting you believed it was possible anytime soon could get you laughed out of the room. (Ilya Sutskever led a chant of “Feel the AGI” at OpenAI’s 2022 holiday party.)
    Perfect Workers

    Employers are already using AI to surveil, control, and exploit workers. But the real dream is to cut humans out of the loop. After all, as Marx wrote, “The machine is a means for producing surplus-value.”

    Open Philanthropy (OP) AI risk researcher Ajeya Cotra wrote to me that “the logical end point of a maximally efficient capitalist or market economy” wouldn’t involve humans because “humans are just very inefficient creatures for making money.” We value all these “commercially unproductive” emotions, she writes, “so if we end up having a good time and liking the outcome, it’ll be because we started off with the power and shaped the system to be accommodating to human values.”

    OP is an EA-inspired foundation financed by Facebook cofounder Dustin Moskovitz. It’s the leading funder of AI safety organizations, many of which are mentioned in this article. OP also granted $30 million to OpenAI to support AI safety work two years before the lab spun up a for-profit arm in 2019. I previously received a onetime grant to support publishing work at New York Focus, an investigative news nonprofit covering New York politics, from EA Funds, which itself receives funding from OP. After I first encountered EA in 2017, I began donating 10 to 20 percent of my income to global health and anti–factory farming nonprofits, volunteered as a local group organizer, and worked at an adjacent global poverty nonprofit. EA was one of the earliest communities to seriously engage with AI existential risk, but I looked at the AI folks with some wariness, given the uncertainty of the problem and the immense, avoidable suffering happening now.

    A compliant AGI would be the worker capitalists can only dream of: tireless, motivated, and unburdened by the need for bathroom breaks. Managers from Frederick Taylor to Jeff Bezos resent the various ways in which humans aren’t optimized for output — and, therefore, their employer’s bottom line. Even before the days of Taylor’s scientific management, industrial capitalism has sought to make workers more like the machines they work alongside and are increasingly replaced by. As The Communist Manifesto presciently observed, capitalists’ extensive use of machinery turns a worker into “an appendage of the machine.”

    But according to the AI safety community, the very same inhuman capabilities that would make Bezos salivate also make AGI a mortal danger to humans.
    Explosion: The Extinction Case

    The common x-risk argument goes: once AI systems reach a certain threshold, they’ll be able to recursively self-improve, kicking off an “intelligence explosion.” If a new AI system becomes smart — or just scaled up — enough, it will be able to permanently disempower humanity.

    The October “Managing AI Risks” paper states:

    There is no fundamental reason why AI progress would slow or halt when it reaches human-level abilities. . . . Compared to humans, AI systems can act faster, absorb more knowledge, and communicate at a far higher bandwidth. Additionally, they can be scaled to use immense computational resources and can be replicated by the millions.

    These features have already enabled superhuman abilities: LLMs can “read” much of the internet in months, and DeepMind’s AlphaFold can perform years of human lab work in a few days.

    Here’s a stylized version of the idea of “population” growth spurring an intelligence explosion: if AI systems rival human scientists at research and development, the systems will quickly proliferate, leading to the equivalent of an enormous number of new, highly productive workers entering the economy. Put another way, if GPT-7 can perform most of the tasks of a human worker and it only costs a few bucks to put the trained model to work on a day’s worth of tasks, each instance of the model would be wildly profitable, kicking off a positive feedback loop. This could lead to a virtual “population” of billions or more digital workers, each worth much more than the cost of the energy it takes to run them. Sutskever thinks it’s likely that “the entire surface of the earth will be covered with solar panels and data centers.”
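
    A toy model of that feedback loop, with every number invented purely for illustration: if each digital worker produces more value per day than it costs to run, and the surplus is reinvested in running more copies, the "population" compounds explosively.

        revenue_per_worker_per_day = 15.0  # dollars of value produced per day (made up)
        cost_per_worker_per_day = 10.0     # compute and energy cost per day (made up)
        workers = 1_000.0                  # initial number of model instances (made up)

        for day in range(30):
            surplus = workers * (revenue_per_worker_per_day - cost_per_worker_per_day)
            workers += surplus / cost_per_worker_per_day  # reinvest surplus in new copies

        print(f"Digital workers after 30 days: {workers:,.0f}")  # roughly 190 million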

    These digital workers might be able to improve on our AI designs and bootstrap their way to creating “superintelligent” systems, whose abilities Alan Turing speculated in 1951 would soon “outstrip our feeble powers.” And, as some AI safety proponents argue, an individual AI model doesn’t have to be superintelligent to pose an existential threat; there might just need to be enough copies of it. Many of my sources likened corporations to superintelligences, whose capabilities clearly exceed those of their constituent members.

    “Just unplug it,” goes the common objection. But once an AI model is powerful enough to threaten humanity, it will probably be the most valuable thing in existence. You might have an easier time “unplugging” the New York Stock Exchange or Amazon Web Services.

    A lazy superintelligence may not pose much of a risk, and skeptics like Allen Institute for AI CEO Oren Etzioni, complexity professor Melanie Mitchell, and AI Now Institute managing director Sarah Myers West all told me they haven’t seen convincing evidence that AI systems are becoming more autonomous. Anthropic’s Dario Amodei seems to agree that current systems don’t exhibit a concerning level of agency. However, a completely passive but sufficiently powerful system wielded by a bad actor is enough to worry people like Bengio.

    Further, academics and industrialists alike are increasing efforts to make AI models more autonomous. Days prior to his firing, Altman told the Financial Times: “We will make these agents more and more powerful . . . and the actions will get more and more complex from here. . . . The amount of business value that will come from being able to do that in every category, I think, is pretty good.”
    What’s Behind the Hype?

    The fear that keeps many x-risk people up at night is not that an advanced AI would “wake up,” “turn evil,” and decide to kill everyone out of malice, but rather that it comes to see us as an obstacle to whatever goals it does have. In his final book, Brief Answers to the Big Questions, Stephen Hawking articulated this, saying, “You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants.”

    Unexpected and undesirable behaviors can result from simple goals, whether it’s profit or an AI’s reward function. In a “free” market, profit-seeking leads to monopolies, multi-level marketing schemes, poisoned air and rivers, and innumerable other harms.

    There are abundant examples of AI systems exhibiting surprising and unwanted behaviors. A program meant to eliminate sorting errors in a list deleted the list entirely. One researcher was surprised to find an AI model “playing dead” to avoid being identified on safety tests.

    Yet others see a Big Tech conspiracy looming behind these concerns. Some people focused on immediate harms from AI argue that the industry is actively promoting the idea that their products might end the world, like Myers West of the AI Now Institute, who says she “see[s] the narratives around so-called existential risk as really a play to take all the air out of the room, in order to ensure that there’s not meaningful movement in the present moment.” Strangely enough, Yann LeCun and Baidu AI chief scientist Andrew Ng purport to agree.

    When I put the idea to x-risk believers, they often responded with a mixture of confusion and exasperation. OP’s Ajeya Cotra wrote back: “I wish it were less of an industry-associated thing to be concerned about x-risk, because I think it’s just really fundamentally, on the merits, a very anti-industry belief to have. . . . If the companies are building things that are going to kill us all, that’s really bad, and they should be restricted very stringently by the law.”

    GovAI’s Markus Anderljung called fears of regulatory capture “a natural reaction for folks to have,” but he emphasized that his preferred policies may well harm the industry’s biggest players.

    One understandable source of suspicion is that Sam Altman is now one of the people most associated with the existential risk idea, but his company has done more than any other to advance the frontier of general-purpose AI.

    Additionally, as OpenAI got closer to profitability and Altman got closer to power, the CEO changed his public tune. In a January 2023 Q and A, when asked about his worst-case scenario for AI, he replied, “Lights out for all of us.” But when answering a similar question under oath before senators in May, Altman didn’t mention extinction. And, in perhaps his last interview before his firing, Altman said, “I actually don’t think we’re all going to go extinct. I think it’s going to be great. I think we’re heading towards the best world ever.”

    Altman implored Congress in May to regulate the AI industry, but a November investigation found that OpenAI’s quasi-parent company Microsoft was influential in the ultimately unsuccessful lobbying to exclude “foundation models” like ChatGPT from regulation by the forthcoming EU AI Act. And Altman did plenty of his own lobbying in the EU, even threatening to pull out of the region if regulations became too onerous (threats he quickly walked back). Speaking on a CEO panel in San Francisco days before his ouster, Altman said that “current models are fine. We don’t need heavy regulation here. Probably not even for the next couple of generations.”

    President Joe Biden’s recent “sweeping” executive order on AI seems to agree: its safety test information sharing requirements only affect models larger than any that have likely been trained so far. Myers West called these kinds of “scale thresholds” a “massive carveout.” Anderljung wrote to me that regulation should scale with a system’s capabilities and usage, and said that he “would like some regulation of today’s most capable and widely used models,” but he thinks it will “be a lot more politically viable to impose requirements on systems that are yet to be developed.”

    Inioluwa Deborah Raji ventured that if the tech giants “know that they have to be the bad guy in some dimension . . . they would prefer for it to be abstract and long-term in timeline.” This sounds far more plausible to me than the idea that Big Tech actually wants to promote the idea that their products have a decent chance of literally killing everyone.

    Nearly seven hundred people signed the extinction letter, the majority of them academics. Only one of them runs a publicly traded company: OP funder Moskovitz, who is also cofounder and CEO of Asana, a productivity app. There were zero employees from Amazon, Apple, IBM, or any leading AI hardware firms. No Meta executives signed.

    If the heads of the Big Tech firms wanted to amplify the extinction narrative, why haven’t they added their names to the list?
    Why Build the “Doom Machine?”

    If AI actually does save the world, whoever created it may hope to be lauded like a modern Julius Caesar. And even if it doesn’t, whoever first builds “the last invention that man need ever make” will not have to worry about being forgotten by history — unless, of course, history ends abruptly after their invention.

    Connor Leahy thinks that, on our current path, the end of history will shortly follow the advent of AGI. With his flowing hair and unkempt goatee, he would probably look at home wearing a sandwich board reading “The end is nigh” — though that hasn’t prevented him from being invited to address the British House of Lords or CNN. The twenty-eight-year-old CEO of Conjecture and cofounder of EleutherAI, an influential open-source collective, told me that a lot of the motivation to build AI boils down to: “‘Oh, you’re building the ultimate doom machine that makes you billions of dollars and also king-emperor of earth or kills everybody?’ Yeah, that’s like the masculine dream. You’re like, ‘Fuck yeah. I am the doom king.’” He continues, “Like, I get it. This is very much in the Silicon Valley aesthetic.”

    Leahy also conveyed something that won’t surprise people who have spent significant time in the Bay Area or certain corners of the internet:

    There are actual, completely unaccountable, unelected, techno-utopian businesspeople and technologists, living mostly in San Francisco, who are willing to risk the lives of you, your children, your grandchildren, and all of future humanity just because they might have a chance to live forever.

    In March, the MIT Technology Review reported that Altman “says he’s emptied his bank account to fund two . . . goals: limitless energy and extended life span.”

    Given all this, you might expect the ethics community to see the safety community as a natural ally in a common struggle to rein in unaccountable tech elites who are unilaterally building risky and harmful products. And, as we saw earlier, many safety advocates have made overtures to the AI ethicists. It’s also rare for people from the x-risk community to publicly attack AI ethics (while the reverse is . . . not true), but the reality is that safety proponents have sometimes been hard to stomach.

    AI ethicists, like the people they advocate for, often report feeling marginalized and cut off from real power, fighting an uphill battle with tech companies who see them as a way to cover their asses rather than as a true priority. Lending credence to this feeling is the gutting of AI ethics teams at many Big Tech companies in recent years (or days). And, in a number of cases, these companies have retaliated against ethics-oriented whistleblowers and labor organizers.

    This doesn’t necessarily imply that these companies are instead seriously prioritizing x-risk. Google DeepMind’s ethics board, which included Larry Page and prominent existential risk researcher Toby Ord, had its first meeting in 2015, but it never had a second one. One Google AI researcher wrote to me that they “don’t talk about long-term risk . . . in the office,” continuing, “Google is more focused on building the tech and on safety in the sense of legality and offensiveness.”

    Software engineer Timnit Gebru co-led Google’s ethical AI team until she was forced out of the company in late 2020 following a dispute over a draft paper — now one of the most famous machine learning publications ever. In the “stochastic parrots” paper, Gebru and her coauthors argue that LLMs damage the environment, amplify social biases, and use statistics to “haphazardly” stitch together language “without any reference to meaning.”

    Gebru, who is no fan of the AI safety community, has called for enhanced whistleblower protections for AI researchers, which are also one of the main recommendations made in GovAI’s white paper. After Gebru was pushed out of Google, nearly 2,700 staffers signed a letter in solidarity, but then-Googler Geoff Hinton was not one of them. When asked on CNN why he didn’t support a fellow whistleblower, Hinton replied that Gebru’s critiques of AI “were rather different concerns from mine” that “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.”

    Raji told me that “a lot of cause for frustration and animosity” between the ethics and safety camps is that “one side has just way more money and power than the other side,” which “allows them to push their agenda way more directly.”

    According to one estimate, the amount of money moving into AI safety start-ups and nonprofits quadrupled between 2020 and 2022, reaching $144 million. It’s difficult to find an equivalent figure for the AI ethics community. However, civil society from either camp is dwarfed by industry spending. In just the first quarter of 2023, OpenSecrets reported roughly $94 million was spent on AI lobbying in the United States. LobbyControl estimated tech firms spent €113 million this year lobbying the EU, and, as noted earlier, hundreds of billions of dollars are being invested in the AI industry as we speak.

    One thing that may drive the animosity even more than any perceived difference in power and money is the trend line. Following widely praised books like 2016’s Weapons of Math Destruction, by data scientist Cathy O’Neil, and bombshell discoveries of algorithmic bias, like the 2018 “Gender Shades” paper by Buolamwini and Gebru, the AI ethics perspective had captured the public’s attention and support.

    In 2014, the AI x-risk cause had its own surprise bestseller, philosopher Nick Bostrom’s Superintelligence, which argued that beyond-human AI could lead to extinction and earned praise from figures like Elon Musk and Bill Gates. But Yudkowsky told me that, pre-ChatGPT, outside of certain Silicon Valley circles, seriously entertaining the book’s thesis would make people look at you funny. Early AI safety proponents like Yudkowsky have occupied the strange position of maintaining close ties to wealth and power through Bay Area techies while remaining marginalized in the wider discourse.

    In the post-ChatGPT world, Turing recipients and Nobel laureates are coming out of the AI safety closet and embracing arguments popularized by Yudkowsky, whose best-known publication is a piece of Harry Potter fan fiction totaling more than 660,000 words.

    Perhaps the most shocking portent of this new world was broadcast in November, when the hosts of a New York Times tech podcast, Hard Fork, asked the Federal Trade Commission chair: “What is your p(doom), Lina Khan? What is your probability that AI will kill us all?” EA water cooler talk has gone mainstream. (Khan said she’s “an optimist” and gave a “low” estimate of 15 percent.)

    It would be easy to observe all the open letters and media cycles and think that the majority of AI researchers are mobilizing against existential risk. But when I asked Bengio about how x-risk is perceived today in the machine learning community, he said, “Oh, it’s changed a lot. It used to be, like, 0.1 percent of people paid attention to the question. And maybe now it’s 5 percent.”
    Probabilities

    Like many others concerned about AI x-risk, the renowned philosopher of mind David Chalmers made a probabilistic argument during our conversation: “This is not a situation where you have to be 100 percent certain that we’ll have human-level AI to worry about it. If it’s 5 percent, that’s something we have to worry about.”

    This kind of statistical thinking is popular in the EA community and is a large part of what led its members to focus on AI in the first place. If you defer to expert arguments, you could end up more confused. But if you try to average the expert concern from the handful of surveys, you might end up thinking there’s at least a few-percent chance that AI extinction could happen, which could be enough to make it the most important thing in the world. And if you put any value on all the future generations that could exist, human extinction is categorically worse than survivable catastrophes.
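
    A minimal sketch of the expected-value arithmetic this style of reasoning relies on (both numbers below are assumptions chosen for illustration, not survey results):

        p_extinction = 0.05   # a "few-percent" chance, in the spirit of the surveys described above
        future_lives = 1e12   # a stylized count of potential future people
        expected_loss = p_extinction * future_lives
        print(f"Expected future lives at stake: {expected_loss:.0e}")  # 5e+10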

    However, in the AI debate, allegations of arrogance abound. Skeptics like Melanie Mitchell and Oren Etzioni told me there wasn’t evidence to support the x-risk case, while believers like Bengio and Leahy point to surprising capability gains and ask: What if progress doesn’t stop? An academic AI researcher friend has likened the advent of AGI to throwing global economics and politics into a blender.

    Even if, for some reason, AGI can only match and not exceed human intelligence, the prospect of sharing the earth with an almost arbitrarily large number of human-level digital agents is terrifying, especially when they’ll probably be trying to make someone money.

    There are far too many policy ideas about how to reduce existential risk from AI to properly discuss here. But one of the clearer messages coming from the AI safety community is that we should “slow down.” Advocates for such a deceleration hope it would give policymakers and broader society a chance to catch up and actively decide how a potentially transformative technology is developed and deployed.
    International Cooperation

    One of the most common responses to any effort to regulate AI is the “but China!” objection. Altman, for example, told a Senate committee in May that “we want America to lead” and acknowledged that a peril of slowing down is that “China or somebody else makes faster progress.”

    Anderljung wrote to me that this “isn’t a strong enough reason not to regulate AI.”

    In a June Foreign Affairs article, Helen Toner and two political scientists reported that the Chinese AI researchers they interviewed thought Chinese LLMs are at least two to three years behind the American state-of-the-art models. Further, the authors argue that since Chinese AI advances “rely a great deal on reproducing and tweaking research published abroad,” a unilateral slowdown “would likely decelerate” Chinese progress as well. China has also moved faster than any other major country to meaningfully regulate AI, as Anthropic policy chief Jack Clark has observed.

    Yudkowsky says, “It’s not actually in China’s interest to commit suicide along with the rest of humanity.”

    If advanced AI really threatens the whole world, domestic regulation alone won’t cut it. But robust national restrictions could credibly signal to other countries how seriously you take the risks. Prominent AI ethicist Rumman Chowdhury has called for global oversight. Bengio says we “have to do both.”

    Yudkowsky, unsurprisingly, has taken a maximalist position, telling me that “the correct direction looks more like putting all of the AI hardware into a limited number of data centers under international supervision by bodies with a symmetric treaty whereby nobody — including the militaries, governments, China, or the CIA — can do any of the really awful things, including building superintelligences.”

    In a controversial Time op-ed from March, Yudkowsky argued to “shut it all down” by establishing an international moratorium on “new large training runs” backed by the threat of military force. Given Yudkowsky’s strong beliefs that advanced AI would be much more dangerous than any nuclear or biological weapon, this radical stance follows naturally.

    All twenty-eight countries at the recent AI Safety Summit, including the United States and China, signed the Bletchley Declaration, which recognized existing harms from AI and the fact that “substantial risks may arise from potential intentional misuse or unintended issues of control relating to alignment with human intent.”

    At the summit, the hosting British government commissioned Bengio to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI,” in a significant step toward a permanent expert body like the Intergovernmental Panel on Climate Change.

    Cooperation between the United States and China will be imperative for meaningful international coordination on AI development. And when it comes to AI, the two countries aren’t exactly on the best terms. With its 2022 export controls on advanced chips, the United States tried to kneecap China’s AI capabilities, something an industry analyst would have previously considered an “act of war.” As Jacobin reported in May, some x-risk-oriented policy researchers likely played a role in passing the onerous controls. In October, the United States tightened the restrictions to close loopholes.

    However, in an encouraging sign, Biden and Xi Jinping discussed AI safety and a ban on AI in lethal weapons systems in November. A White House press release stated, “The leaders affirmed the need to address the risks of advanced AI systems and improve AI safety through U.S.-China government talks.”

    Lethal autonomous weapons are also an area of relative agreement in the AI debates. In her new book Unmasking AI: My Mission to Protect What Is Human in a World of Machines, Joy Buolamwini advocates for the Stop Killer Robots campaign, echoing a longtime concern of many AI safety proponents. The Future of Life Institute, an x-risk organization, assembled ideological opponents to sign a 2016 open letter calling for a ban on offensive LAWs, including Bengio, Hinton, Sutton, Etzioni, LeCun, Musk, Hawking, and Noam Chomsky.
    A Seat at the Table

    After years of inaction, the world’s governments are finally turning their attention to AI. But by not seriously engaging with what future systems could do, socialists are ceding their seat at the table.

    In no small part because of the types of people who became attracted to AI, many of the earliest serious adopters of the x-risk idea decided to either engage in extremely theoretical research on how to control advanced AI or started AI companies. But for a different type of person, the response to believing that AI could end the world is to try to get people to stop building it.

    Boosters keep saying that AI development is inevitable — and if enough people believe it, it becomes true. But “there is nothing about artificial intelligence that is inevitable,” writes the AI Now Institute. Managing director Myers West echoed this, mentioning that facial recognition technology looked inevitable in 2018 but has since been banned in many places. And as x-risk researcher Katja Grace points out, we shouldn’t feel the need to build every technology simply because we can.

    Additionally, many policymakers are looking at recent AI advances and freaking out. Senator Mitt Romney is “more terrified about AI” than optimistic, and his colleague Chris Murphy says, “The consequences of so many human functions being outsourced to AI is potentially disastrous.” Congresspeople Ted Lieu and Mike Johnson are literally “freaked out” by AI. If certain techies are the only people willing to acknowledge that AI capabilities have dramatically improved and could pose a species-level threat in the future, that’s who policymakers will disproportionately listen to. In May, professor and AI ethicist Kristian Lum tweeted: “There’s one existential risk I’m certain LLMs pose and that’s to the credibility of the field of FAccT [Fairness, Accountability, and Transparency] / Ethical AI if we keep pushing the snake oil narrative about them.”

    Even if the idea of AI-driven extinction strikes you as more fi than sci, there could still be enormous impact in influencing how a transformative technology is developed and what values it represents. Assuming we can get a hypothetical AGI to do what we want raises perhaps the most important question humanity will ever face: What should we want it to want?

    When I asked Chalmers about this, he said, “At some point we recapitulate all the questions of political philosophy: What kind of society do we actually want and actually value?”

    One way to think about the advent of human-level AI is that it would be like creating a new country’s constitution (Anthropic’s “constitutional AI” takes this idea literally, and the company recently experimented with incorporating democratic input into its model’s foundational document). Governments are complex systems that wield enormous power. The foundation upon which they’re established can influence the lives of millions now and in the future. Americans live under the yoke of dead men who were so afraid of the public, they built antidemocratic measures that continue to plague our political system more than two centuries later.

    AI may be more revolutionary than any past innovation. It’s also a uniquely normative technology, given how much we build it to reflect our preferences. As Jack Clark recently mused to Vox, “It’s a real weird thing that this is not a government project.” Chalmers said to me, “Once we suddenly have the tech companies trying to build these goals into AI systems, we have to really trust the tech companies to get these very deep social and political questions right. I’m not sure I do.” He emphasized, “You’re not just in technical reflection on this but in social and political reflection.”
    False Choices

    We may not need to wait to find superintelligent systems that don’t prioritize humanity. Superhuman agents ruthlessly optimize for a reward at the expense of anything else we might care about. The more capable the agent and the more ruthless the optimizer, the more extreme the results.
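
    A toy, invented example of what "ruthless optimization" of a narrow reward looks like: the designer wants a clean room and an intact vase, but only cleanliness enters the objective, so the highest-scoring policy is the destructive one.

        # The reward ignores everything except cleanliness (a deliberately misspecified objective).
        def reward(cleanliness: float, vase_intact: bool) -> float:
            return cleanliness  # the vase never enters the objective

        candidate_policies = {
            "careful cleaning": {"cleanliness": 0.9, "vase_intact": True},
            "sweep everything, vase included, into the trash": {"cleanliness": 1.0, "vase_intact": False},
        }

        best = max(candidate_policies, key=lambda name: reward(**candidate_policies[name]))
        print(best)  # "sweep everything, vase included, into the trash"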

    Sound familiar? If so, you’re not alone. The AI Objectives Institute (AOI) looks at both capitalism and AI as examples of misaligned optimizers. Cofounded by former public radio show host Brittney Gallagher and “privacy hero” Peter Eckersley shortly before his unexpected death, the research lab examines the space between annihilation and utopia, “a continuation of existing trends of concentration of power in fewer hands — super-charged by advancing AI — rather than a sharp break with the present.” AOI president Deger Turan told me, “Existential risk is failure to coordinate in the face of a risk.” He says that “we need to create bridges between” AI safety and AI ethics.

    One of the more influential ideas in x-risk circles is the unilateralist’s curse, a term for situations in which a lone actor can ruin things for the whole group. For example, if a group of biologists discovers a way to make a disease more deadly, it only takes one to publish it. Over the last few decades, many people have become convinced that AI could wipe out humanity, but only the most ambitious and risk-tolerant of them have started the companies that are now advancing the frontier of AI capabilities, or, as Sam Altman recently put it, pushing the “veil of ignorance back.” As the CEO alludes, we have no way of truly knowing what lies beyond the technological limit.

    Some of us fully understand the risks but plow forward anyway. With the help of top scientists, ExxonMobil had discovered conclusively by 1977 that their product caused global warming. They then lied to the public about it, all while building their oil platforms higher.

    The idea that burning carbon could warm the climate was first hypothesized in the late nineteenth century, but the scientific consensus on climate change took nearly one hundred years to form. The idea that we could permanently lose control to machines is older than digital computing, but it remains far from a scientific consensus. And if recent AI progress continues at pace, we may not have decades to form a consensus before meaningfully acting.

    The debate playing out in the public square may lead you to believe that we have to choose between addressing AI’s immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration.

    But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.

    In short, it’s capitalism versus humanity.

    #intelligence_artificielle #politique #disruption

    • J’ai survolé, et il me semble que l’article n’évoque jamais le transhumanisme de Ray Kurzweil, qui est pourtant l’idéologie quasi religieuse particulièrement en vogue en Californie. Et dont Larry Page est connu pour être un des importants mécènes.

      Or dans le texte, ça transparaît en permanence, et même les critiques des développements de l’IA semblent ici largement y adhérer.

  • Au 23 décembre 2023, l’armée israélienne avait déjà tué 20 des 105 soldats tués à Gaza. Tirs amis ou accidents (Times of Israel)

    Sur les 105 soldats tués dans la bande de Gaza au cours de l’offensive terrestre d’Israël contre le Hamas, qui a commencé fin octobre, 20 ont été tués par des tirs « amis » et d’autres au cours d’accidents, selon de nouvelles données publiées par l’armée israélienne mardi.

    Treize des soldats ont été tués par des tirs amis dus à une erreur d’identification, y compris lors de frappes aériennes, de tirs de chars et de tirs d’armes à feu.

    Un soldat a été tué par un tir qui ne lui était pas destiné, et deux autres ont été tués par des tirs accidentels. Deux soldats ont été tués dans des incidents au cours desquels des véhicules blindés ont écrasé des troupes.

    Enfin, deux soldats ont été tués par des éclats d’explosifs déclenchés intentionnellement par les forces israéliennes.

    Selon l’armée israélienne, il y aurait une multitude de raisons à l’origine de ces accidents mortels, comme le grand nombre de forces opérant dans la bande de Gaza, les problèmes de communication entre les forces et la fatigue des soldats, qui les rend peu attentifs aux réglementations.
    . . . . .

    #Palestine #israel #israël #tsahal #Gaza #Hamas #armée #bavures #IA #Palestine_assassinée #guerre #intelligence_artificielle

    Source : https://fr.timesofisrael.com/tsahal-20-des-105-soldats-tues-a-gaza-ont-ete-victimes-de-tirs-ami

  • La Tribune : Amazon abandonne ses magasins sans caisse... en réalité gérés par des travailleurs indiens à distance (Marine Protais)

    Le géant du e-commerce, qui opère également des magasins physiques, renonce à sa technologie Just Walk Out dans ses supermarchés Amazon Fresh aux États-Unis. Ce système permet à ses clients de faire leurs emplettes sans passer par l’étape de la caisse. Mais il nécessite des caméras, des capteurs et surtout le travail de 1.000 travailleurs indiens, donnant l’illusion de l’automatisation.


    En 2016, on les annonçait comme le futur du commerce. Plus besoin de caissiers, ni de vigiles, ni même de sortir votre portefeuille. Pour faire vos courses dans les supermarchés Amazon, il suffisait d’entrer, de scanner un QR code sur une application, de prendre vos produits et de sortir. Le montant de vos achats était calculé à la sortie du magasin grâce à un système mêlant caméras et capteurs décrit comme automatique, puis directement débité sur votre carte bancaire.

    Mais nous voici en 2024, et le géant du e-commerce, diversifié dans les magasins physiques, abandonne en partie cette technologie, nous apprend le média américain The Information https://www.theinformation.com/articles/amazons-grocery-stores-to-drop-just-walk-out-checkout-tech . Elle sera supprimée des 27 magasins « Amazon Fresh » américains (des supermarchés où l’on trouve des produits frais), où elle était installée. En guise de remplacement, ces magasins seront équipés de caddies « intelligents », capables de scanner automatiquement les produits, rapporte le média d’investigation américain. L’information a ensuite été confirmée auprès d’AP https://apnews.com/article/amazon-fresh-just-walk-out-bb36bb24803bd56747c6f99814224265 par un porte-parole de l’entreprise. Le système Just Walk Out restera pour le moment dans les plus petites boutiques « Amazon Go », et chez la centaine de partenaires de la firme.

    L’illusion de l’automatisation
    Pour se passer de caissier sur place, le système « Just Walk Out » nécessite son lot de caméras et de capteurs, permettant de suivre le client en magasin, mais surtout d’humains, chargés de vérifier à distance les achats des clients via les caméras. The Information rapporte que plus de 1.000 personnes en Inde sont chargées de ce travail.

    En plus de cette automatisation illusoire, le système « Just Walk Out » faisait depuis quelques années l’objet de critiques. Les clients se plaignent de tickets de caisse reçus des heures après leurs achats, ou de commandes mal gérées par le système. En 2023, la firme avait d’ailleurs annoncé une réorganisation de ses magasins, pour rendre les technologies moins visibles et l’ambiance moins froide. Et le rythme d’ouvertures des enseignes avait été revu à la baisse.

    Par ailleurs, la technologie soulève des questions quant à la protection de la vie privée. Fin 2023, plusieurs consommateurs ont lancé une class action, accusant Amazon de collecter les données biométriques des clients, la forme de leur main et de leur visage ainsi que la tonalité de leur voix, via le système Just Walk Out sans demander leur consentement. Une pratique contraire à une loi de l’Illinois sur le traitement des données biométriques.

    Les entrepôts « automatisés » d’Amazon également surveillés par des travailleurs indiens
    Comme le note le chercheur Antonio Casilli, spécialiste du « travail du clic », cette histoire est banale. Sur X, il rappelle qu’en 2023, Time nous apprenait qu’Alexa, l’assistant virtuel de l’entreprise de Seattle, fonctionnait grâce à l’écoute de 30.000 travailleurs qui annotaient les conversations des utilisateurs pour améliorer les algorithmes gérant l’assistant.

    Et en 2022, The Verge rapportait que les entrepôts automatisés d’Amazon nécessitaient, là encore à distance, le travail de vigiles au Costa Rica et en Inde, chargés de regarder les images des caméras plus de 40 heures par semaine pour 250 dollars par mois.

    #IA #intelligence_artificielle : #Fumisterie, #arnaque ou #escroquerie ? #amazon #caméras #capteurs #automatisation #technologie #travail #Entrepôts #algorithmes #Alexa

    Source : https://www.latribune.fr/technos-medias/informatique/amazon-abandonne-ses-magasins-sans-caisse-en-realite-geres-par-des-travail

    • Amazon : pourquoi la tech autonome “Just Walk Out” passe à la trappe
      Confirmation sur le blog d’Olivier Dauvers, le web grande conso

      Amazon vient d’annoncer l’abandon de la technologie Just Walk Out dans ses magasins Fresh aux États-Unis (une cinquantaine d’unités, dont la moitié sont équipées). Just Walk Out, c’est la techno, totalement bluffante, de magasin autonome sans caisses que je vous ai montrée en vidéo dès 2020 (ici) ou encore à Washington et Los Angeles dans de vrais formats de supermarché Whole Foods (ici et là).

      Des centaines de caméras dopées à l’IA au plafond couplées à des balances sur les étagères permettent de pister l’intégralité du parcours d’achat du client, lequel s’affranchit du passage en caisse. Bluffant (vraiment) je vous dis. 


      Un de ces magasins où l’être humain est banni

      Appelons un chat un chat, pour Amazon, ce revirement est un aveu d’échec cuisant. Car la vente de ses technos est au cœur du modèle économique d’Amazon dans le retail physique. Si le groupe lui-même ne parvient pas à prouver la viabilité de Just Walk Out, quel concurrent irait l’acheter ?

      Ce qu’il faut retenir de cet abandon ? Que les technos de magasins autonomes ne sont, pour l’heure, déployables que sur de (très) petits formats bénéficiant d’un flux clients très élevé. Pour des raisons assez évidentes de Capex/m2… mais aussi de supervision humaine. Car, à date, l’IA seule n’est pas en mesure de gérer tous les scénarios de course (dont les tentatives de démarque), obligeant un visionnage de contrôle par l’humain (localisé dans des pays à bas salaire). 
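
      Pour donner un ordre de grandeur à l’argument Capex/m² (croquis purement illustratif : les montants et les flux sont inventés, rien n’est tiré des comptes d’Amazon), on peut ramener le coût fixe d’équipement au nombre de paniers servis :

      ```python
      # Ordre de grandeur purement illustratif (tous les chiffres sont inventés) :
      # le Capex d'équipement ne s'amortit que si le flux de paniers par m² est très élevé.
      def cout_par_panier(capex_par_m2, surface_m2, paniers_par_jour, annees=5):
          """Capex (caméras, capteurs, balances) ramené à chaque passage en magasin."""
          capex_total = capex_par_m2 * surface_m2
          paniers_total = paniers_par_jour * 365 * annees
          return round(capex_total / paniers_total, 2)

      print(cout_par_panier(2500, 300, 1500), "€/panier (petit format urbain à très fort flux)")
      print(cout_par_panier(2500, 1500, 2500), "€/panier (supermarché plus grand, flux à peine supérieur)")
      ```

      Avec une surface multipliée par cinq mais un flux à peine supérieur, le coût par panier grimpe nettement : d’où la limitation, pour l’heure, aux petits formats à très fort trafic.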

      #techno #échec

      Source : https://www.olivierdauvers.fr/2024/04/04/amazon-pourquoi-la-tech-autonome-just-walk-out-passe-a-la-trappe

  • How Hollywood writers triumphed over AI – and why it matters | US writers’ strike 2023 | The Guardian
    https://www.theguardian.com/culture/2023/oct/01/hollywood-writers-strike-artificial-intelligence

    Hollywood writers scored a major victory this week in the battle over artificial intelligence with a new contract featuring strong guardrails in how the technology can be used in film and television projects.

    With terms of AI use finally agreed, some writers are breathing easier – for now – and experts say the guidelines could offer a model for workers in Hollywood and other industries. The writers’ contract does not outlaw the use of AI tools in the writing process, but it sets up guardrails to make sure the new technology stays in the control of workers, rather than being used by their bosses to replace them.

    The new rules guard against several scenarios that writers had feared, comedian Adam Conover, a member of the WGA negotiating committee, told the Guardian. One such scenario was studios being allowed to generate a full script using AI tools, and then demanding that a human writer complete the writing process.

    Under the new terms, studios “cannot use AI to write scripts or to edit scripts that have already been written by a writer”, Conover says. The contract also prevents studios from treating AI-generated content as “source material”, like a novel or a stage play, that screenwriters could be assigned to adapt for a lower fee and less credit than a fully original script.

    For instance, if the studios were allowed to use ChatGPT to generate a 100,000-word novel and then ask writers to adapt it, “That would be an easy loophole for them to reduce the wages of screenwriters,” Conover said. “We’re not allowing that.” If writers adapt output from large language models, it will still be considered an original screenplay, he said.

    Simon Johnson, an economist at MIT who studies technological transformation, called the new terms a “fantastic win for writers”, and said that it would likely result in “better quality work and a stronger industry for longer”.

    #Intelligence_artificielle #Scénaristes #Hollywood #Grève

  • Is “Deep Learning” a Revolution in Artificial Intelligence ? | The New Yorker
    https://www.newyorker.com/news/news-desk/is-deep-learning-a-revolution-in-artificial-intelligence

    Intéressant de relire un article sur l’IA qui a 12 ans.
    Comment la technologie a progressé rapidement. Et dans le même temps, comment les interrogations subsistent.

    By Gary Marcus
    November 25, 2012

    Can a new technique known as deep learning revolutionize artificial intelligence, as yesterday’s front-page article at the New York Times suggests? There is good reason to be excited about deep learning, a sophisticated “machine learning” algorithm that far exceeds many of its predecessors in its abilities to recognize syllables and images. But there’s also good reason to be skeptical. While the Times reports that “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking,” deep learning takes us, at best, only a small step toward the creation of truly intelligent machines. Deep learning is important work, with immediate practical applications. But it’s not as breathtaking as the front-page story in the New York Times seems to suggest.

    The technology on which the Times focusses, deep learning, has its roots in a tradition of “neural networks” that goes back to the late nineteen-fifties. At that time, Frank Rosenblatt attempted to build a kind of mechanical brain called the Perceptron, which was billed as “a machine which senses, recognizes, remembers, and responds like the human mind.” The system was capable of categorizing (within certain limits) some basic shapes like triangles and squares. Crowds were amazed by its potential, and even The New Yorker was taken in, suggesting that this “remarkable machine…[was] capable of what amounts to thought.”

    But the buzz eventually fizzled; a critical book written in 1969 by Marvin Minsky and his collaborator Seymour Papert showed that Rosenblatt’s original system was painfully limited, literally blind to some simple logical functions like “exclusive-or” (As in, you can have the cake or the pie, but not both). What had become known as the field of “neural networks” all but disappeared.

    Rosenblatt’s ideas reëmerged however in the mid-nineteen-eighties, when Geoff Hinton, then a young professor at Carnegie-Mellon University, helped build more complex networks of virtual neurons that were able to circumvent some of Minsky’s worries. Hinton had included a “hidden layer” of neurons that allowed a new generation of networks to learn more complicated functions (like the exclusive-or that had bedeviled the original Perceptron). Even the new models had serious problems though. They learned slowly and inefficiently, and as Steven Pinker and I showed, couldn’t master even some of the basic things that children do, like learning the past tense of regular verbs. By the late nineteen-nineties, neural networks had again begun to fall out of favor.

    Hinton soldiered on, however, making an important advance in 2006, with a new technique that he dubbed deep learning, which itself extends important earlier work by my N.Y.U. colleague, Yann LeCun, and is still in use at Google, Microsoft, and elsewhere. A typical setup is this: a computer is confronted with a large set of data, and on its own asked to sort the elements of that data into categories, a bit like a child who is asked to sort a set of toys, with no specific instructions. The child might sort them by color, by shape, or by function, or by something else. Machine learners try to do this on a grander scale, seeing, for example, millions of handwritten digits, and making guesses about which digits look more like one another, “clustering” them together based on similarity. Deep learning’s important innovation is to have models learn categories incrementally, attempting to nail down lower-level categories (like letters) before attempting to acquire higher-level categories (like words).
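
    For readers who want a concrete feel for the “clustering by similarity” step described above, here is a minimal sketch (mine, not Marcus’s or Google’s; it assumes scikit-learn is installed, uses a small bundled digit dataset, and deliberately skips the incremental, lower-level-before-higher-level training that makes deep learning distinctive):

    ```python
    # Rough illustration of "clustering by similarity" on a small bundled dataset of
    # 8x8 handwritten digits (this is plain k-means, not the deep, layer-wise systems
    # Marcus is writing about).
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_digits

    digits = load_digits()                       # 1,797 images, with true labels we won't train on
    kmeans = KMeans(n_clusters=10, n_init=10, random_state=0)
    clusters = kmeans.fit_predict(digits.data)   # group images purely by pixel similarity

    # How pure is one cluster? Similarity alone often mixes visually confusable digits.
    in_cluster_0 = digits.target[clusters == 0]
    print("true digits that landed in cluster 0:", np.bincount(in_cluster_0, minlength=10))
    ```

    Inspecting a cluster usually shows the limits Marcus points to: similarity alone happily lumps together digits that merely look alike.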

    Deep learning excels at this sort of problem, known as unsupervised learning. In some cases it performs far better than its predecessors. It can, for example, learn to identify syllables in a new language better than earlier systems. But it’s still not good enough to reliably recognize or sort objects when the set of possibilities is large. The much-publicized Google system that learned to recognize cats, for example, works about seventy per cent better than its predecessors. But it still recognizes less than a sixth of the objects on which it was trained, and it did worse when the objects were rotated or moved to the left or right of an image.

    Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (such as between diseases and their symptoms), and are likely to face challenges in acquiring abstract ideas like “sibling” or “identical to.” They have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson, the machine that beat humans in “Jeopardy,” use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.

    In August, I had the chance to speak with Peter Norvig, Director of Google Research, and asked him if he thought that techniques like deep learning could ever solve complicated tasks that are more characteristic of human intelligence, like understanding stories, which is something Norvig used to work on in the nineteen-eighties. Back then, Norvig had written a brilliant review of the previous work on getting machines to understand stories, and fully endorsed an approach that built on classical “symbol-manipulation” techniques. Norvig’s group is now working with Hinton, and Norvig is clearly very interested in seeing what Hinton could come up with. But even Norvig didn’t see how you could build a machine that could understand stories using deep learning alone.

    To paraphrase an old parable, Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.

    Gary Marcus, Professor of Psychology at N.Y.U., is author of “Guitar Zero: The Science of Becoming Musical at Any Age” and “Kluge: The Haphazard Evolution of The Human Mind.”

    #Intelligence_artificielle #Connexionnisme #Histoire

  • ChatGPT : face aux artifices de l’IA, comment l’éducation aux médias peut aider les élèves
    https://theconversation.com/chatgpt-face-aux-artifices-de-lia-comment-leducation-aux-medias-peu

    Comme le montre une étude de la Columbia Journalism Review, la panique n’a pas commencé en décembre 2022 avec l’événement lancé par OpenAI mais en février 2023 avec les annonces de Microsoft et Google, chacun y allant de son chatbot intégré dans leur moteur de recherche (Bing Chat et Bard, respectivement). La couverture médiatique opère un brouillage informationnel, se focalisant davantage sur le potentiel remplacement de l’humain que sur la réelle concentration de la propriété de l’IA dans les mains de quelques entreprises.

    Comme toute panique médiatique (les plus récentes étant celles sur la réalité virtuelle et le métavers), elle a pour but et effet de créer un débat public permettant à d’autres acteurs que ceux des médias et du numérique de s’en emparer. Pour l’éducation aux médias et à l’information (EMI), les enjeux sont de taille en matière d’interactions sociales et scolaires, même s’il est encore trop tôt pour mesurer les conséquences sur l’enseignement de ces modèles de langage générant automatiquement des textes et des images et de leur mise à disposition auprès du grand public.

    Les publics de l’IA, notamment à l’école, se doivent donc de développer des connaissances et compétences autour des risques et opportunités de ce genre de robot dit conversationnel. Outre la compréhension des mécanismes du traitement automatique de l’information et de la désinformation, d’autres précautions prêtent à éducation :

    prendre garde au monopole de la requête en ligne, tel que visé par Bing Chat et Google Bard, en jouant de la concurrence entre elles, donc en utilisant régulièrement plusieurs moteurs de recherche ;

    exiger des labels, des codes couleur et autres marqueurs pour indiquer qu’un document a été produit par une IA ou avec son aide est aussi frappé au coin du bon sens et certains médias l’ont déjà anticipé ;

    demander que les producteurs fassent de la rétro-ingénierie pour produire des IA qui surveillent l’IA. Ce qui est déjà le cas avec GPTZero ;

    entamer des poursuites judiciaires en cas d’« hallucination » de ChatGPT, encore un terme anthropomorphisé pour désigner une erreur du système !

    Et se souvenir que, plus on utilise ChatGPT, sous sa version gratuite comme payante, plus on l’aide à s’améliorer.

    Dans le domaine éducatif, les solutions marketing de la EdTech vantent les avantages de l’IA pour personnaliser les apprentissages, faciliter l’analyse de données, augmenter l’efficacité administrative… Mais ces métriques et statistiques ne sauraient en rien se substituer à la validation des compétences acquises et aux productions des jeunes.

    Aussi intelligente qu’elle prétende être, l’IA ne peut remplacer la nécessité pour les élèves de développer leur esprit critique et leur propre créativité, de se former et s’informer en maîtrisant leurs sources et ressources. Alors que la EdTech, notamment aux États-Unis, se précipite pour introduire l’IA dans les classes, de l’école primaire au supérieur, la vigilance des enseignants et des décideurs reste primordiale pour préserver les missions centrales de l’école et de l’université. L’intelligence collective peut ainsi s’emparer de l’intelligence artificielle.

    #Intelligence_artificielle #EMI #Education_medias_information

  • Discrimination 2.0 : ces algorithmes qui perpétuent le racisme

    L’IA et les systèmes algorithmiques peuvent désavantager des personnes en raison de leur origine, voire conduire à des discriminations raciales sur le marché du travail. A l’occasion de la Journée internationale pour l’élimination de la discrimination raciale, AlgorithmWatch CH, humanrights.ch et le National Coalition Building Institute NCBI mettent en lumière la manière dont les systèmes automatisés utilisés dans les procédures de recrutement peuvent reproduire les inégalités.

    Les procédures d’embauche sont et ont toujours été caractérisées par une certaine inégalité des chances. Aujourd’hui, les entreprises utilisent souvent des systèmes algorithmiques pour traiter les candidatures, les trier et faire des recommandations pour sélectionner des candidat·e·x·s. Si les départements des ressources humaines des grandes entreprises souhaitent augmenter leur efficacité grâce aux « Applicant Tracking Systems » (ATS), l’utilisation de tels systèmes peut renforcer les stéréotypes discriminatoires ou même en créer de nouveaux. Les personnes issues de l’immigration sont souvent concernées par cette problématique.
    Exemple 1 : un algorithme qui préfère les CV « indigènes »

    Une étude récente menée en Grande-Bretagne a comparé les CV sélectionnés par une personne experte en ressources humaines et ceux qu’un système de recommandation algorithmique avait identifiés comme étant ceux de candidat·e·x·s compétent·e·x·s. La comparaison a montré que les personnes que les recruteur·euse·x·s considéraient comme les meilleur·e·x·s candidat·e·x·s ne faisaient parfois même pas partie de la sélection effectuée par les systèmes basés sur des algorithmes. Ces systèmes ne sont pas capables de lire tous les formats avec la même efficacité ; aussi les candidatures compétentes ne correspondant pas au format approprié sont-elles automatiquement éliminées. Une étude portant sur un autre système a également permis de constater des différences claires dans l’évaluation des CV. Ainsi, il s’est avéré que le système attribuait davantage de points aux candidatures « indigènes », en l’occurrence britanniques, qu’aux CV internationaux. Les candidat·e·x·s britanniques avaient donc un avantage par rapport aux personnes migrantes ou ayant une origine étrangère pour obtenir une meilleure place dans le classement.
    Exemple 2 : les formations à l’étranger moins bien classées

    En règle générale, les systèmes de recrutement automatisés sont entraînés de manière à éviter l’influence de facteurs tels que le pays d’origine, l’âge ou le sexe sur la sélection. Les candidatures contiennent toutefois également des attributs plus subtils, appelés « proxies » (en français : variables de substitution), qui peuvent indirectement donner des informations sur ces caractéristiques démographiques, par exemple les compétences linguistiques ou encore l’expérience professionnelle ou les études à l’étranger. Ainsi, la même étude a révélé que le fait d’avoir étudié à l’étranger entraînait une baisse des points attribués par le système pour 80% des candidatures. Cela peut conduire à des inégalités de traitement dans le processus de recrutement pour les personnes n’ayant pas grandi ou étudié dans le pays dans lequel le poste est proposé.

    Les critères de sélection de nombreux systèmes de recrutement basés sur les algorithmes utilisés par les entreprises sont souvent totalement opaques. De même, les jeux de données utilisés pour entraîner les algorithmes d’auto-apprentissage se basent généralement sur des données historiques. Si une entreprise a par exemple jusqu’à présent recruté principalement des hommes blancs âgés de 25 à 30 ans, il se peut que l’algorithme « apprenne » sur cette base que de tels profils doivent également être privilégiés pour les nouveaux postes à pourvoir. Ces stéréotypes et effets discriminatoires ne viennent pas de l’algorithme lui-même, mais découlent de structures ancrées dans notre société ; ils peuvent toutefois être répétés, repris et donc renforcés par l’algorithme.
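
    À titre d’illustration (croquis personnel : les données sont entièrement synthétiques et n’ont rien à voir avec les études citées), le script ci-dessous entraîne un modèle de présélection sur des décisions d’embauche historiquement biaisées ; il apprend à pénaliser la variable de substitution « études à l’étranger » alors qu’elle ne dit rien des compétences :

    ```python
    # Croquis synthétique : un modèle de tri de candidatures entraîné sur des décisions
    # d'embauche historiquement biaisées apprend à pénaliser une variable de substitution.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000
    competence = rng.normal(size=n)                 # ce qu'on voudrait réellement sélectionner
    etudes_etranger = rng.integers(0, 2, size=n)    # proxy de l'origine, sans lien avec la compétence

    # Décisions historiques : fondées sur la compétence, mais les recruteurs d'hier
    # défavorisaient systématiquement les candidatures ayant étudié à l'étranger.
    embauche = (competence - 1.0 * etudes_etranger + rng.normal(scale=0.5, size=n)) > 0

    X = np.column_stack([competence, etudes_etranger])
    modele = LogisticRegression().fit(X, embauche)
    print("poids appris [compétence, études à l'étranger] :", modele.coef_[0])
    ```

    Le coefficient associé à la variable de substitution ressort nettement négatif : le modèle ne fait que rejouer le biais contenu dans l’historique, sans jamais « voir » l’origine des personnes.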

    Ces exemples illustrent la discrimination par les algorithmes de personnes sur la base de leur origine. Les algorithmes discriminent également de nombreux autres groupes de population. En Suisse aussi, de plus en plus d’entreprises font usage d’algorithmes pour leurs processus de recrutement ainsi que sur le lieu de travail.

    Discrimination algorithmique en Suisse : le cadre légal de protection contre la discrimination en Suisse ne protège pas suffisamment contre la discrimination par les systèmes algorithmiques et doit être renforcé. Ce papier de position présente les problématiques liées à la discrimination algorithmique et décrit les moyens d’améliorer la protection contre ce type de discrimination.

    Les algorithmes discriminent également de nombreux autres groupes de population. Dans la série « Discrimination 2.0 : ces algorithmes qui discriminent », AlgorithmWatch CH et humanrights.ch, en collaboration avec d’autres organisations, mettent en lumière divers cas de discrimination algorithmique.

    https://www.humanrights.ch/fr/nouvelles/discrimination-20-algorithmes-perpetuent-racisme
    #discrimination #racisme #algorithme #xénophobie #IA #AI #intelligence_artificielle #travail #recrutement #discrimination_raciale #inégalités #ressources_humaines #Applicant_Tracking_Systems (#ATS) #CV #curriculum_vitae #sélection #tri

    • « L’IA et les systèmes algorithmiques peuvent désavantager des personnes en raison de leur origine, voire conduire à des discriminations raciales sur le marché du travail. » Mais l’IA et les systèmes algorithmiques peuvent tout aussi bien avantager des personnes en raison de leur origine, voire conduire à des discriminations raciales sur le marché du travail. La Banque mondiale exige déjà une discrimination selon les pratiques sexuelles pour favoriser emprunts et subventions !

  • Belgian beer study acquires taste for machine learning • The Register
    https://www.theregister.com/2024/03/27/belgian_beer_machine_learning

    Researchers reckon results could improve recipe development for food and beverages
    Lindsay Clark
    Wed 27 Mar 2024 // 11:45 UTC

    Joining the list of things that probably don’t need improving by machine learning but people are going to try anyway is Belgian beer.

    The ale family has long been a favorite of connoisseurs worldwide, yet one group of scientists decided it could be brewed better with the assistance of machine learning.

    In a study led by Michiel Schreurs, a doctoral student at Vlaams Instituut voor Biotechnologie (VIB) in Flanders, the researchers wanted to help develop new alcoholic and non-alcoholic beer flavors with higher rates of consumer appreciation.

    Understanding the relationship between beer chemistry and its taste can be a tricky task. Much of the work is trial and error and relies on extensive consumer testing.

    #Intelligence_artificielle #Bière #Bullshit #Statistiques_fantasques

  • #Wauquiez veut surveiller les #trains et #lycées régionaux avec l’#intelligence_artificielle

    #Laurent_Wauquiez a fait voter le déploiement de la #vidéosurveillance_algorithmique dans tous les lycées et trains d’#Auvergne-Rhône-Alpes, profitant de l’#expérimentation accordée aux #Jeux_olympiques de Paris.

    Laurent Wauquiez savoure. « Nous avions pris position sur la vidéosurveillance pendant la campagne des régionales. Depuis, les esprits ont bougé », sourit le président de la région Auvergne-Rhône-Alpes, en référence à l’expérimentation de la #vidéosurveillance algorithmique (#VSA) accordée dans le cadre des Jeux olympiques de Paris. Surfant sur l’opportunité, il a fait voter jeudi 21 mars en Conseil régional sa propre expérimentation de vidéosurveillance « intelligente » des lycées et des trains d’Auvergne-Rhône-Alpes.

    L’ancien patron des Républicains (LR) justifie cette avancée technosécuritaire par l’assassinat du professeur #Dominique_Bernard dans un lycée d’Arras en octobre. Pour l’élu, cette tragédie « confirme la nécessité de renforcer la #sécurité des lycées ».

    Reste que cette expérimentation n’est pour l’instant pas légale. Laurent Wauquiez va demander au Premier ministre, Gabriel Attal, la permission d’élargir la loi pour couvrir les lycées et les transports régionaux. « L’expérimentation des JO est faite pour tester ce qui sera appliqué. Il faut en profiter », défend Renaud Pfeffer, vice-président délégué à la sécurité de la région Auvergne-Rhône-Alpes.

    Selon la délibération votée par le Conseil régional, cette #technologie qui combine vidéosurveillance et intelligence artificielle peut détecter huit types d’événements prédéterminés : « le non-respect du sens de circulation, le franchissement d’une zone interdite, la présence ou l’utilisation d’une arme, un départ de feu, un mouvement de foule, une personne au sol, une densité trop importante, un colis abandonné. » Les événements sont ensuite vérifiés par un agent, qui décide des mesures à prendre.
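
    Pour fixer les idées, voici un croquis purement hypothétique (il ne s’agit ni du code ni de l’architecture du dispositif réel) du flux décrit dans la délibération : une détection relevant de l’un des huit types d’événements prédéterminés est placée dans une file d’attente, puis vérifiée par un agent qui décide des mesures à prendre.

    ```python
    # Croquis minimal et hypothétique du flux décrit : détection d'un des huit
    # événements prédéterminés, puis mise en file d'attente pour vérification humaine.
    from dataclasses import dataclass
    from enum import Enum, auto

    class Evenement(Enum):
        CONTRESENS = auto()
        ZONE_INTERDITE = auto()
        ARME = auto()
        DEPART_DE_FEU = auto()
        MOUVEMENT_DE_FOULE = auto()
        PERSONNE_AU_SOL = auto()
        DENSITE_TROP_IMPORTANTE = auto()
        COLIS_ABANDONNE = auto()

    @dataclass
    class Detection:
        evenement: Evenement
        camera_id: str      # identifiant fictif, pour l'exemple
        confiance: float    # score produit par le modèle d'analyse d'images

    file_de_verification: list[Detection] = []

    def signaler(detection: Detection, seuil: float = 0.8) -> None:
        """L'algorithme ne décide rien : au-delà du seuil, la détection est simplement
        transmise à un agent, qui vérifie les images et décide des mesures à prendre."""
        if detection.confiance >= seuil:
            file_de_verification.append(detection)

    signaler(Detection(Evenement.COLIS_ABANDONNE, camera_id="camera-fictive-12", confiance=0.91))
    print(len(file_de_verification), "détection(s) en attente de vérification humaine")
    ```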

    L’expérimentation doit durer deux ans

    L’exécutif régional promet d’utiliser cette vidéosurveillance algorithmique « sans mettre en œuvre de reconnaissance faciale, ni d’identification de données biométriques [qui permettent d’identifier une personne]. » « On est sur du situationnel, pas sur de l’individu », insiste #Renaud_Pfeffer. Des promesses auxquelles ne croit pas Marne Strazielle, directrice de la communication de l’association de défense des droits et libertés sur internet La Quadrature du net. « En réalité, l’#algorithme identifie des actions qui peuvent être rattachées à son auteur », insiste-t-elle.

    Cette expérimentation est programmée pour durer deux ans dans les trains, #gares, lycées et #cars_scolaires. Les flux vidéos seront examinés au #Centre_régional_de_surveillance_des_transports (#CRST) aménagé en gare de Lyon Part-Dieu. « Les caméras sont prêtes », assure Renaud Pfeffer. Depuis son arrivée à la tête de la Région en 2016, Laurent Wauquiez l’a généreusement équipée en vidéosurveillance : 129 gares sont surveillées par 2 300 caméras, dont les images sont visionnées en temps réel au CRST. 285 lycées, 750 cars et la totalité des rames ferroviaires sont également équipés.

    « L’illusion d’avoir une approche pratique de l’insécurité »

    Pour défendre son projet, l’exécutif régional s’appuie sur la loi du 19 mai 2023, adoptée pour les Jeux olympiques de Paris et qui autorise l’expérimentation à grande échelle de la VSA par la police nationale jusqu’au 31 mars 2025. « On n’a le droit à la sécurité que pendant les Jeux olympiques et qu’à Paris ? On ne peut pas tester [la VSA] pour nos enfants, contre les problèmes de drogue ? », s’offusque Laurent Wauquiez.

    « Cette technologie permet aux décideurs politiques d’offrir l’illusion d’avoir une approche pratique de l’insécurité car ils mettent en place un dispositif, dénonce Marne Strazielle. Mais ce n’est pas parce qu’on enregistre et détecte une action dans l’espace public qu’elle va moins se produire. Souvent, cela ne fait que déplacer le problème. Il faut faire le travail de comprendre pourquoi elle s’est produite et comment la réduire. »

    La #Commission_nationale_de_l’informatique_et_des_libertés (#Cnil), qui n’a pas été consultée par l’équipe de Laurent Wauquiez, rappelle à Reporterre sa position de principe, qui « considère que la mise en œuvre de caméras augmentées conduit fréquemment à limiter les droits des personnes filmées ». Pour l’autorité administrative indépendante, « le déploiement de ces dispositifs dans les espaces publics, où s’exercent de nombreuses libertés individuelles (liberté d’aller et venir, d’expression, de réunion, droit de manifester, liberté de culte, etc.), présente incontestablement des risques pour les droits et libertés fondamentaux des personnes et la préservation de leur anonymat ».

    https://reporterre.net/Wauquiez-veut-surveiller-les-trains-et-lycees-regionaux-avec-l-intellige
    #surveillance #IA #AI #France #JO #JO_2024

    • La région #AURA vote le déploiement de la VSA dans les gares et les lycées

      Il en rêvait, il l’a fait. Un article de Reporterre nous apprend que Laurent Wauquiez a fait voter, jeudi 21 mars en Conseil régional, le déploiement de la vidéosurveillance algorithmique dans tous les lycées et trains d’Auvergne-Rhône-Alpes, profitant de l’expérimentation accordée aux Jeux olympiques de Paris.

      Actuellement 129 gares seraient surveillées par 2 300 caméras, dont les images sont visionnées en temps réel au CRST. 285 lycées, 750 cars et la totalité des rames ferroviaires seraient également équipés.

      Selon la délibération votée par le Conseil régional, l’équipement de ces caméras avec la vidéosurveillance automatisée pourra détecter huit types d’événements prédéterminés : « le non-respect du sens de circulation, le franchissement d’une zone interdite, la présence ou l’utilisation d’une arme, un départ de feu, un mouvement de foule, une personne au sol, une densité trop importante, un colis abandonné. ». Les événements seront ensuite vérifiés par un agent, qui décidera des mesures à prendre.

      L’exécutif régional promet d’utiliser cette vidéosurveillance algorithmique « sans mettre en œuvre de reconnaissance faciale, ni d’identification de données biométriques [qui permettent d’identifier une personne]. » Cependant, comme l’a très bien démontré la Quadrature du Net, la VSA implique nécessairement une identification biométrique.

      La VSA et la reconnaissance faciale reposent sur les mêmes algorithmes d’analyse d’images, la seule différence est que la première isole et reconnaît des corps, des mouvements ou des objets, lorsque la seconde détecte un visage.

      La VSA est capable de s’intéresser à des « événements » (déplacements rapides, altercations, immobilité prolongée) ou aux traits distinctifs des personnes : une silhouette, un habillement, une démarche, grâce à quoi elle peut isoler une personne au sein d’une foule et la suivre tout le long de son déplacement dans la ville. La VSA identifie et analyse donc en permanence des données biométriques.

      « En réalité, l’algorithme identifie des actions qui peuvent être rattachées à son auteur » (Marne Strazielle, directrice de la communication de La Quadrature du net.)

      Ce sont généralement les mêmes entreprises qui développent ces deux technologies. Par exemple, la start-up française Two-I s’est d’abord lancée dans la détection d’émotions, a voulu la tester dans les tramways niçois, avant d’expérimenter la reconnaissance faciale sur des supporters de football à Metz. Finalement, l’entreprise semble se concentrer sur la VSA et en vendre à plusieurs communes de France. La VSA est une technologie biométrique intrinsèquement dangereuse, l’accepter c’est ouvrir la voie aux pires outils de surveillance.
      "Loi J.O. : refusons la surveillance biométrique", La Quadrature du Net

      Cela fait longtemps que M. Wauquiez projette d’équiper massivement cars scolaires et inter-urbains, gares et TER d’Auvergne-Rhône-Alpes en caméras et de connecter le tout à la reconnaissance faciale.

      En juin 2023, nous avions déjà sorti un article sur le sujet, au moment de la signature d’une convention entre la région Auvergne-Rhône-Alpes, le préfet et la SNCF, autorisant le transfert aux forces de sécurité des images des caméras de vidéosurveillance de 129 gares sur les quelque 350 que compte la région AURA.

      Depuis fin 2023, il demande également d’utiliser à titre expérimental des "logiciels de reconnaissance faciale" aux abords des lycées pour pouvoir identifier des personnes "suivies pour radicalisation terroriste".

      Une mesure qui a déjà été reconnue comme illégale par la justice, comme l’a rappelé le média Reporterre. En 2019, un projet de mise en place de portiques de reconnaissance faciale à l’entrée de lycées à Nice et Marseille avait été contesté par La Quadrature du Net et la LDH. La Commission nationale de l’informatique et des libertés (CNIL), qui avait déjà formulé des recommandations, avait rendu à l’époque un avis jugeant le dispositif non nécessaire et disproportionné.

      Mais cela n’arrêtera pas Laurent Wauquiez : celui-ci a déclaré qu’il allait demander au Premier ministre, Gabriel Attal, la permission d’élargir la loi pour couvrir les lycées et les transports régionaux...

      La CNIL, qui n’a pas été consultée par l’équipe de Laurent Wauquiez, a rappelé à Reporterre sa position de principe, qui « considère que la mise en œuvre de caméras augmentées conduit fréquemment à limiter les droits des personnes filmées ».

      Pour elle, « le déploiement de ces dispositifs dans les espaces publics, où s’exercent de nombreuses libertés individuelles (liberté d’aller et venir, d’expression, de réunion, droit de manifester, liberté de culte, etc.), présente incontestablement des risques pour les droits et libertés fondamentaux des personnes et la préservation de leur anonymat ».

      Des dizaines d’organisations, parmi lesquelles Human Rights Watch, ont adressé une lettre publique aux députés, les alertant sur le fait que les nouvelles dispositions créent un précédent inquiétant de surveillance injustifiée et disproportionnée dans les espaces publics, et menacent les droits fondamentaux, tels que le droit à la vie privée, la liberté de réunion et d’association, et le droit à la non-discrimination.

      Résistons à la #VSA et à la technopolice !


      https://halteaucontrolenumerique.fr/?p=5351

  • Five of this year’s Pulitzer finalists are AI-powered | Nieman Journalism Lab
    https://www.niemanlab.org/2024/03/five-of-this-years-pulitzer-finalists-are-ai-powered

    Two of journalism’s most prestigious prizes — the Pulitzers and the Polk awards — on how they’re thinking about entrants using generative AI.
    By Alex Perry March 11, 2024, 10:31 a.m.

    Five of the 45 finalists in this year’s Pulitzer Prizes for journalism disclosed using AI in the process of researching, reporting, or telling their submissions, according to Pulitzer Prize administrator Marjorie Miller.

    It’s the first time the awards, which received around 1,200 submissions this year, required entrants to disclose AI usage. The Pulitzer Board only added this requirement to the journalism category. (The list of finalists is not yet public. It will be announced, along with the winners, on May 8, 2024.)

    Miller, who sits on the 18-person Pulitzer board, said the board started discussing AI policies early last year because of the rising popularity of generative AI and machine learning.

    “AI tools at the time had an ‘oh no, the devil is coming’ reputation,” she said, adding that the board was interested in learning about AI’s capabilities as well as its dangers.

    Last July — the same month OpenAI struck a deal with the Associated Press and a $5 million partnership with the American Journalism Project — a Columbia Journalism School professor was giving the Pulitzer Board a crash course in AI with the help of a few other industry experts.

    Mark Hansen, who is also the director of the David and Helen Gurley Brown Institute for Media Innovation, wanted to provide the board with a broad base of AI usage in newsrooms from interrogating large datasets to writing code for web-scraping large language models.

    He and AI experts from The Marshall Project, Harvard Innovation Labs, and Center for Cooperative Media created informational videos about the basics of large language models and newsroom use cases. Hansen also moderated a Q&A panel featuring AI experts from Bloomberg, The Markup, McClatchy, and Google.

    Miller said the board’s approach from the beginning was always exploratory. They never considered restricting AI usage because they felt doing so would discourage newsrooms from engaging with innovative technology.

    “I see it as an opportunity to sample the creativity that journalists are bringing to generative AI, even in these early days,” said Hansen, who didn’t weigh in directly on the new awards guideline.

    While the group focused on generative AI’s applications, they spent substantial time on relevant copyright law, data privacy, and bias in machine learning models. One of the experts Hansen invited was Carrie J. Cai, a staff research scientist in Google’s Responsible AI division who specializes in human-computer interaction.

    #Journalisme #Intelligence_artificielle #Pulitzer

  • Ketty Introduces AI Book Designer: Revolutionizing Book Production
    https://www.robotscooking.com/ketty-ai-designer

    Effortless Book Design with AI

    The AI Book Designer introduces a groundbreaking approach to book design by allowing users to style and format their books using simple, intuitive commands. Users can say “make the book look modern”, “make the text more readable”, or click on a chapter title and instruct the AI to “add this to the header,” and the changes are applied instantly. This eliminates the need to learn complex design software or to code, making professional-grade design accessible to everyone.
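
    As a rough sketch of the interaction model being described (hypothetical, not Coko’s or Ketty’s actual code; a real system would presumably interpret commands with a language model rather than hard-coded rules), a command is parsed and mapped onto concrete style changes that take effect immediately:

    ```python
    # Hypothetical, rule-based stand-in for the "command -> style change" loop described
    # above (not Coko/Ketty code; the property names are ordinary CSS, chosen for the example).
    book_styles = {"body": {"font-family": "Georgia", "font-size": "10pt", "line-height": "1.2"}}

    def apply_command(command: str, styles: dict) -> dict:
        cmd = command.lower()
        if "readable" in cmd:
            styles["body"].update({"font-size": "12pt", "line-height": "1.5"})
        elif "modern" in cmd:
            styles["body"].update({"font-family": "Helvetica Neue, sans-serif"})
        return styles

    apply_command("make the text more readable", book_styles)
    print(book_styles["body"])   # the change is applied immediately, no design software needed
    ```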

    Where we are headed

    As we look towards the future development of the AI Book Designer, there are several ideas we are currently thinking about:

    AI-Generated Cover Designs: Generate a range of cover options based on user input.
    Collaborative AI Design: Enable multiple users to work on the same book design simultaneously (concurrently). This feature could be particularly useful for larger publishing teams or co-authored projects.
    AI-Assisted Image Management: Automatically apply styles and optimize the placement of images within the book layout.

    Join the Movement

    Coko believes AI has the potential to transform book production, making it accessible and efficient for everyone. By combining open-source code and principles with cutting-edge technology, Coko is paving the way for a new era of automated typesetting and book design.

    #Typographie #Coko #Intelligence_artificielle

  • Border security with drones and databases

    The EU’s borders are increasingly militarised, with hundreds of millions of euros paid to state agencies and military, security and IT companies for surveillance, patrols and apprehension and detention. This process has massive human cost, and politicians are planning to intensify it.

    Europe is ringed by steel fences topped by barbed wire; patrolled by border agents equipped with thermal vision systems, heartbeat detectors, guns and batons; and watched from the skies by drones, helicopters and planes. Anyone who enters is supposed to have their fingerprints and photograph taken for inclusion in an enormous biometric database. Constant additions to this technological arsenal are under development, backed by generous amounts of public funding. Three decades after the fall of the Berlin Wall, there are more walls than ever at Europe’s borders,[1] and those borders stretch ever further in and out of its territory. This situation is the result of long-term political and corporate efforts to toughen up border surveillance and controls.

    The implications for those travelling to the EU depend on whether they belong to the majority entering in a “regular” manner, with the necessary paperwork and permissions, or are unable to obtain that paperwork, and cross borders irregularly. Those with permission must hand over increasing amounts of personal data. The increasing automation of borders is reliant on the collection of sensitive personal data and the use of algorithms, machine learning and other forms of so-called artificial intelligence to determine whether or not an individual poses a threat.

    Those without permission to enter the EU – a category that includes almost any refugee, with the notable exception of those who hold a Ukrainian passport – are faced with technology, personnel and policies designed to make journeys increasingly difficult, and thus increasingly dangerous. The reliance on smugglers is a result of the insistence on keeping people in need out at any cost – and the cost is substantial. Thousands of people die at Europe’s borders every year, families are separated, and people suffer serious physical and psychological harm as a result of those journeys and subsequent administrative detention and social marginalisation. Yet parties of all political stripes remain committed to the same harmful and dangerous policies – many of which are being worsened through the new Pact on Migration and Asylum.[2]

    The EU’s border agency, Frontex, based in Warsaw, was first set up in 2004 with the aim of providing technical coordination between EU member states’ border guards. Its remit has been gradually expanded. Following the “migration crisis” of 2015 and 2016, extensive new powers were granted to the agency. As the Max Planck Institute has noted, the 2016 law shifted the agency from playing a “support role” to acting as “a player in its own right that fulfils a regulatory, supervisory, and operational role.”[3] New tasks granted to the agency included coordinating deportations of rejected refugees and migrants, data analysis and exchange, border surveillance, and technology research and development. A further legal upgrade in 2019 introduced even more extensive powers, in particular in relation to deportations, and cooperation with and operations in third countries.

    The uniforms, guns and batons wielded by Frontex’s border guards are self-evidently militaristic in nature, as are other aspects of its work: surveillance drones have been acquired from Israeli military companies, and the agency deploys “mobile radars and thermal cameras mounted on vehicles, as well as heartbeat detectors and CO2 monitors used to detect signs of people concealed inside vehicles.”[4] One investigation described the companies that have held lobbying meetings or attended events with Frontex as “a Who’s Who of the weapons industry,” with guests including Airbus, BAE Systems, Leonardo and Thales.[5] The information acquired from the agency’s surveillance and field operations is combined with data provided by EU and third country agencies, and fed into the European Border Surveillance System, EUROSUR. This offers a God’s-eye overview of the situation at Europe’s borders and beyond – the system also claims to provide “pre-frontier situational awareness.”

    The EU and its member states also fund research and development on these technologies. From 2014 to 2022, 49 research projects were provided with a total of almost €275 million to investigate new border technologies, including swarms of autonomous drones for border surveillance, and systems that aim to use artificial intelligence to integrate and analyse data from drones, satellites, cameras, sensors and elsewhere for “analysis of potential threats” and “detection of illegal activities.”[6] Amongst the top recipients of funding have been large research institutes – for example, Germany’s Fraunhofer Institute – but companies such as Leonardo, Smiths Detection, Engineering – Ingegneria Informatica and Veridos have also been significant beneficiaries.[7]

    This is only a tiny fraction of the funds available for strengthening the EU’s border regime. A 2022 study found that between 2015 and 2020, €7.7 billion had been spent on the EU’s borders and “the biggest parts of this budget come from European funding” – that is, the EU’s own budget. The total value of the budgets that provide funds for asylum, migration and border control between 2021-27 comes to over €113 billion[8]. Proposals for the next round of budgets from 2028 until 2035 are likely to be even larger.

    Cooperation between the EU, its member states and third countries on migration control comes in a variety of forms: diplomacy, short and long-term projects, formal agreements and operational deployments. Whatever form it takes, it is frequently extremely harmful. For example, to try to reduce the number of people arriving across the Mediterranean, member states have withdrawn national sea rescue assets (as deployed, for example, in Italy’s Mare Nostrum operation) whilst increasing aerial surveillance, such as that provided by the Israel-produced drones operated by Frontex. This makes it possible to observe refugees attempting to cross the Mediterranean, whilst outsourcing their interception to authorities from countries such as Libya, Tunisia and Egypt.

    This is part of an ongoing plan “to strengthen coordination of search and rescue capacities and border surveillance at sea and land borders” of those countries. [9] Cooperation with Tunisia includes refitting search and rescue vessels and providing vehicles and equipment to the Tunisian coastguard and navy, along with substantial amounts of funding. The agreement with Egypt appears to be structured along similar lines, and five vessels have been provided to the so-called Libyan Coast Guard in 2023.[10]

    Frontex also plays a key role in the EU’s externalised border controls. The 2016 reform allowed Frontex deployments at countries bordering the EU, and the 2019 reform allowed deployments anywhere in the world, subject to agreement with the state in question. There are now EU border guards stationed in Albania, Montenegro, Serbia, Bosnia and Herzegovina, and North Macedonia.[11] The agency is seeking agreements with Niger, Senegal and Morocco, and has recently received visits from Tunisian and Egyptian officials with a view to stepping up cooperation.[12]

    In a recent report for the organisation EuroMed Rights, Antonella Napolitano highlighted “a new element” in the EU’s externalisation strategy: “the use of EU funds – including development aid – to outsource surveillance technologies that are used to entrench political control both on people on the move and local population.” Five means of doing so have been identified: provision of equipment; training; financing operations and procurement; facilitating exports by industry; and promoting legislation that enables surveillance.[13]

    The report highlights Frontex’s extended role which, even without agreements allowing deployments on foreign territory, has seen the agency support the creation of “risk analysis cells” in a number of African states, used to gather and analyse data on migration movements. The EU has also funded intelligence training in Algeria, digital evidence capacity building in Egypt, border control initiatives in Libya, and the provision of surveillance technology to Morocco. The European Ombudsman has found that insufficient attention has been given to the potential human rights impacts of this kind of cooperation.[14]

    While the EU and its member states may provide the funds for the acquisition of new technologies, or the construction of new border control systems, information on the companies that receive the contracts is not necessarily publicly available. Funds awarded to third countries will be spent in accordance with those countries’ procurement rules, which may not be as transparent as those in the EU. Indeed, the acquisition of information on the externalisation in third countries is far from simple, as a Statewatch investigation published in March 2023 found.[15]

    While EU and member state institutions are clearly committed to continuing with plans to strengthen border controls, there is a plethora of organisations, initiatives, campaigns and projects in Europe, Africa and elsewhere that are calling for a different approach. One major opportunity to call for change in the years to come will revolve around proposals for the EU’s new budgets in the 2028-35 period. The European Commission is likely to propose pouring billions more euros into borders – but there are many alternative uses of that money that would be more positive and productive. The challenge will be in creating enough political pressure to make that happen.

    This article was originally published by Welt Sichten, and is based upon the Statewatch/EuroMed Rights report Europe’s techno-borders.

    Notes

    [1] https://www.tni.org/en/publication/building-walls

    [2] https://www.statewatch.org/news/2023/december/tracking-the-pact-human-rights-disaster-in-the-works-as-parliament-makes

    [3] https://www.mpg.de/14588889/frontex

    [4] https://www.theguardian.com/global-development/2021/dec/06/fortress-europe-the-millions-spent-on-military-grade-tech-to-deter-refu

    [5] https://frontexfiles.eu/en.html

    [6] https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders

    [7] https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders

    [8] https://www.statewatch.org/publications/reports-and-books/europe-s-techno-borders

    [9] https://www.statewatch.org/news/2023/november/eu-planning-new-anti-migration-deals-with-egypt-and-tunisia-unrepentant-

    [10] https://www.statewatch.org/media/4103/eu-com-von-der-leyen-ec-letter-annex-10-23.pdf

    [11] https://www.statewatch.org/analyses/2021/briefing-external-action-frontex-operations-outside-the-eu

    [12] https://www.statewatch.org/news/2023/november/eu-planning-new-anti-migration-deals-with-egypt-and-tunisia-unrepentant-, https://www.statewatch.org/publications/events/secrecy-and-the-externalisation-of-eu-migration-control

    [13] https://privacyinternational.org/challenging-drivers-surveillance

    [14] https://euromedrights.org/wp-content/uploads/2023/07/Euromed_AI-Migration-Report_EN-1.pdf

    [15] https://www.statewatch.org/access-denied-secrecy-and-the-externalisation-of-eu-migration-control

    https://www.statewatch.org/analyses/2024/border-security-with-drones-and-databases
    #frontières #militarisation_des_frontières #technologie #données #bases_de_données #drones #complexe_militaro-industriel #migrations #réfugiés #contrôles_frontaliers #surveillance #sécurité_frontalière #biométrie #données_biométriques #intelligence_artificielle #algorithmes #smugglers #passeurs #Frontex #Airbus #BAE_Systems #Leonardo #Thales #EUROSUR #coût #business #prix #Smiths_Detection #Fraunhofer_Institute #Engineering_Ingegneria_Informatica #informatique #Tunisie #gardes-côtes_tunisiens #Albanie #Monténégro #Serbie #Bosnie-Herzégovine #Macédoine_du_Nord #Egypte #externalisation #développement #aide_au_développement #coopération_au_développement #Algérie #Libye #Maroc #Afrique_du_Nord

  • Médicaments non délivrés, devis et facturation en panne… Une cyberattaque perturbe sérieusement le système de santé aux États-Unis Ingrid Vergara

    La cyberattaque d’une filiale de la plus importante compagnie d’assurance-santé américaine tourne à la crise d’ampleur aux États-Unis. Victime d’un rançongiciel qui affecte une de ses divisions depuis le 21 février, le groupe UnitedHealthcare n’est plus en mesure d’assurer de nombreuses tâches nécessaires au bon fonctionnement du système de santé. Des médecins qui ne peuvent plus savoir si un patient bénéficie ou non d’une assurance-santé, des pharmacies incapables de transmettre les demandes de remboursement de patients, des factures d’hôpitaux non réglées, des retards dans les délivrances d’ordonnances de médicaments…

    The chain reactions are spreading and worsening as the days go by, because UnitedHealthcare is the largest payment-exchange platform between doctors, pharmacies, healthcare providers and patients in the American health system. Its subsidiary Change handles billing for some 67,000 pharmacies, . . . . .

    #Santé #internet #sécurité_informatique #cyberattaques #cybersécurité #malware #usa #UnitedHealthcare #algorithme #juste_à_temps #dématérialisation #intelligence_artificielle #artificial-intelligence #blockchain #IA

    Source and full story (paywalled): https://www.lefigaro.fr/secteur/high-tech/medicaments-non-delivres-devis-et-facturation-en-panne-une-cyberattaque-per

  • ASMA MHALLA

    The tech giants deploy a coherent worldview, as in any madness

    In her essay "Technopolitique", the EHESS researcher analyses how the big technology companies, with their ultra-sophisticated innovations, are redrawing the balance of power with the state, which makes a democratic response necessary.

    https://www.liberation.fr/idees-et-debats/asma-mhalla-les-geants-de-la-tech-deploient-une-vision-du-monde-coherente

    https://justpaste.it/bpo3a

  • Artificial intelligence and translation: an impossible dialogue?
    https://actualitte.com/article/115941/auteurs/intelligence-artificielle-et-traduction-l-impossible-dialogue

    Publishers' use of these tools is not so exceptional, and professionals fear being relegated to the role of proofreaders of the machine's output.

    The Association des traducteurs littéraires de France (ATLF) and the Association pour la promotion de la traduction littéraire (ATLAS) pointed out in early 2023: "Everyone who thinks about translation or has practised it knows this: you do not translate words, but an intention, implications, ambiguity, what is left unsaid and yet exists in the folds of a literary text."

    #Edition #Traduction #Intelligence_artificielle

  • Glüxkind AI Stroller - The Best Smart Stroller for your family
    https://gluxkind.com

    So much nonsense in so little space... not bad!!!
    Children, the latest victims of algorithmic predation.

    Unlock Your Helping hand

    Designed by parents with sky-high standards, our AI Powered Smart Strollers elevates the happy moments of daily parenting and lightens the stressful ones.

    Feel supported and savour moments of peace and quiet with features like Automatic Rock-My-Baby or the built-in White Noise Machine to help soothe your little one.

    #Poussette #Intelligence_artificielle #Parentalité

  • One big thing missing from the AI conversation | Zeynep Tufekci - GZERO Media
    https://www.gzeromedia.com/gzero-world-clips/one-big-thing-missing-from-the-ai-conversation-zeynep-tufekci

    When deployed cheaply and at scale, artificial intelligence will be able to infer things about people, places, and entire nations, which humans alone never could. This is both good and potentially very, very bad.

    If you were to think of some of the most overlooked stories of 2023, artificial intelligence would probably not make your list. OpenAI’s ChatGPT has changed how we think about AI, and you’ve undoubtedly read plenty of quick takes about how AI will save or destroy the planet. But according to Princeton sociologist Zeynep Tufekci, there is a super important implication of AI that not enough people are talking about.

    “Rather than looking at what happens between you and me if we use AI,” Tufekci said to Ian on the sidelines of the Paris Peace Forum, “What I would like to see discussed is what happens if it’s used by a billion people?” In a short but substantive interview for GZERO World, Tufekci breaks down just how important it is to think about the applications of AI “at scale” when its capabilities can be deployed cheaply. Tufekci cites the example of how AI could change hiring practices in ways we might not intend, like weeding out candidates with clinical depression or with a history of unionizing. AI at scale will demonstrate a remarkable ability to infer things that humans cannot, Tufekci explains.

    #Intelligence_artificielle #Zeynep_Tufekci

  • The cost of human labour generally remains lower than that of AI, according to MIT, FashionNetwork.com (Bloomberg)

    For now, employing humans remains more economical than resorting to artificial intelligence for the majority of jobs. That, at least, is the finding of a study carried out by the Massachusetts Institute of Technology, at a time when many sectors feel threatened by the progress made by AI.

    It is one of the first in-depth studies on the subject. The researchers built economic models to calculate how attractive it would be to automate various tasks in the United States, focusing on "digitisable" jobs such as teaching or real-estate brokerage.

    Source: https://fr.fashionnetwork.com/news/premiumContent,1597548.html

    #ia #intelligence_artificielle #algorithme #surveillance #ai #google #technologie #facebook #technologisme #travail #biométrie #bigdata #coût #MIT

  • Test Yourself : Which Faces Were Made by A.I.? - The New York Times
    https://www.nytimes.com/interactive/2024/01/19/technology/artificial-intelligence-image-generators-faces-quiz.html

    Take the test (I only got 40% of the answers right!!!).

    Distinguishing a real face from an A.I.-generated one has proved especially confounding.

    Research published across multiple studies found that faces of white people created by A.I. systems were perceived as more realistic than genuine photographs of white people, a phenomenon called hyper-realism.

    Researchers believe A.I. tools excel at producing hyper-realistic faces because they were trained on tens of thousands of images of real people. Those training datasets contained images of mostly white people, resulting in hyper-realistic white faces. (The over-reliance on images of white people to train A.I. is a known problem in the tech industry.)

    The confusion among participants was less apparent among nonwhite faces, researchers found.

    Participants were also asked to indicate how sure they were in their selections, and researchers found that higher confidence correlated with a higher chance of being wrong.

    “We were very surprised to see the level of over-confidence that was coming through,” said Dr. Amy Dawel, an associate professor at Australian National University, who was an author on two of the studies.

    “It points to the thinking styles that make us more vulnerable on the internet and more vulnerable to misinformation,” she added.

    #Intelligence_artificielle #faux_portraits

  • Usbek & Rica - Hijacking, copyright… 5 tools for confusing AIs
    https://usbeketrica.com/fr/article/detournement-droit-d-auteur-5-outils-pour-embrouiller-les-ia

    "ChatGPT is just a very advanced, very large-scale version of everything we have long known how to do in machine learning." That is how the American historian Fred Turner, author of several award-winning studies on the impact of new technologies on American culture, summed up in our pages his scepticism about the media frenzy surrounding artificial intelligence.

    Still, as Fred Turner himself acknowledges, the emergence of ChatGPT, Dall-E, MidJourney and the like is tipping us into a world where that "very large scale" changes just about everything, especially when it comes to creative work.

    Articles written by robots, illustrations generated by a few lines of code… Behind these apparent feats lie machine learning algorithms built from huge, rather mundane online databases that are not always protected, and therefore potentially fallible. To throw these systems off, assert their rights or simply secure their data, some engineers have spent the past few months building tools of all kinds, from amateur websites to professional software. We have catalogued five of them.

    #Fred_Turner #Intelligence_artificielle

  • A “robot” should be chemical, not steel, argues man who coined the word | Ars Technica
    https://arstechnica.com/information-technology/2024/01/a-robot-should-be-chemical-not-steel-argues-man-who-coined-the-word

    In 1921, Czech playwright Karel Čapek and his brother Josef invented the word “robot” in a sci-fi play called R.U.R. (short for Rossum’s Universal Robots). As Evan Ackerman points out in IEEE Spectrum, Čapek wasn’t happy about how the term’s meaning evolved to denote mechanical entities, straying from his original concept of artificial human-like beings based on chemistry.

    In a newly translated column called “The Author of the Robots Defends Himself,” published in Lidové Noviny on June 9, 1935, Čapek expresses his frustration about how his original vision for robots was being subverted. His arguments still apply to both modern robotics and AI. In this column, he referred to himself in the third person:

    #Intelligence_artificielle #Robots #Karel_Capek

  • New MIT CSAIL study suggests that AI won’t steal as many jobs as expected | TechCrunch
    https://techcrunch.com/2024/01/22/new-mit-csail-study-suggests-that-ai-wont-steal-as-many-jobs-expected

    Will AI automate human jobs, and — if so — which jobs and when?

    That’s the trio of questions a new research study from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), out this morning, tries to answer.

    There have been many attempts to extrapolate and project how today’s AI technologies, like large language models, might impact people’s livelihoods — and whole economies — in the future.

    Goldman Sachs estimates that AI could automate 25% of the entire labor market in the next few years. According to McKinsey, nearly half of all work will be AI-driven by 2055. A survey from the University of Pennsylvania, NYU and Princeton finds that ChatGPT alone could impact around 80% of jobs. And a report from the outplacement firm Challenger, Gray & Christmas suggests that AI is already replacing thousands of workers.

    But in their study, the MIT researchers sought to move beyond what they characterize as “task-based” comparisons and assess how feasible it is that AI will perform certain roles — and how likely businesses are to actually replace workers with AI tech.

    Contrary to what one (including this reporter) might expect, the MIT researchers found that the majority of jobs previously identified as being at risk of AI displacement aren’t, in fact, “economically beneficial” to automate — at least at present.

    The key takeaway, says Neil Thompson, a research scientist at MIT CSAIL and a co-author on the study, is that the coming AI disruption might happen slower — and less dramatically — than some commentators are suggesting.

    Early in the study, the researchers give the example of a baker.

    A baker spends about 6% of their time checking food quality, according to the U.S. Bureau of Labor Statistics — a task that could be (and is being) automated by AI. A bakery employing five bakers making $48,000 per year could save $14,000 were it to automate food quality checks. But by the study’s estimates, a bare-bones, from-scratch AI system up to the task would cost $165,000 to deploy and $122,840 per year to maintain . . . and that’s on the low end.
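
    To make the study's arithmetic concrete, here is a rough back-of-the-envelope sketch in Python of the bakery comparison described above. The figures are those quoted in the article; the break-even check itself is only an illustration, not the MIT researchers' actual model.

        # Back-of-the-envelope version of the bakery example quoted above.
        # Figures come from the article; the comparison is illustrative only,
        # not the MIT study's actual economic model.

        NUM_BAKERS = 5
        ANNUAL_WAGE = 48_000          # dollars per baker per year
        QUALITY_CHECK_SHARE = 0.06    # ~6% of working time spent checking food quality

        AI_DEPLOYMENT_COST = 165_000  # one-off cost of a from-scratch vision system (low end)
        AI_ANNUAL_UPKEEP = 122_840    # yearly maintenance cost (low end)

        # Wages that automating the quality checks could in principle save each year.
        annual_savings = NUM_BAKERS * ANNUAL_WAGE * QUALITY_CHECK_SHARE
        print(f"Potential annual wage savings: ${annual_savings:,.0f}")  # ~$14,400 (the article rounds to $14,000)

        # Even ignoring the one-off deployment cost, the yearly upkeep alone
        # exceeds the savings, so automation is not economically attractive here.
        print(f"First-year AI cost: ${AI_DEPLOYMENT_COST + AI_ANNUAL_UPKEEP:,.0f}")
        print("Worth automating:", annual_savings > AI_ANNUAL_UPKEEP)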

    “We find that only 23% of the wages being paid to humans for doing vision tasks would be economically attractive to automate with AI,” Thompson said. “Humans are still the better economic choice for doing these parts of jobs.”

    Now, the study does account for self-hosted, self-service AI systems sold through vendors like OpenAI that only need to be fine-tuned to particular tasks — not trained from the ground up. But according to the researchers, even with a system costing as little as $1,000, there are lots of jobs — albeit low-wage and multitasking-dependent — that wouldn’t make economic sense for a business to automate.

    #Intelligence_artificielle #travail

  • The European AI regulation (#règlement) will not ban mass biometric surveillance (#surveillance_biométrique)

    On 8 December 2023, #Union_européenne (European Union) legislators congratulated themselves on reaching an agreement on the long-awaited proposal for a regulation on artificial intelligence (the "AI Act", #règlement_IA). The lead parliamentarians had assured their colleagues that they had managed to enshrine strong human rights (#droits_humains) protections in the text, notably by excluding mass biometric surveillance (#surveillance_biométrique_de_masse, #SBM).

    Yet despite the announcements European decision-makers made at the time, the AI Act will not ban the vast majority of dangerous practices linked to mass biometric surveillance. On the contrary, it defines, for the first time in the #UE (EU), conditions under which these systems may lawfully be used. MEPs and the ministers of EU member states will vote on whether to accept the final agreement in spring 2024.

    The EU makes history, for the wrong reasons

    The #Reclaim_Your_Face coalition has long argued that mass biometric surveillance practices are error-prone and risky by design, and have no place in a democratic society. The police and public authorities already hold a vast amount of data on each of us; they do not need the power to identify and profile us permanently, objectifying our faces (#visages) and bodies (#corps) at the push of a button.

    Yet despite a strong negotiating position from the European Parliament, which demanded a ban on most mass biometric surveillance practices, very little survived the negotiations on the AI regulation. Under pressure from law-enforcement (#forces_de_l’ordre) representatives, Parliament was forced to accept particularly weak limits on intrusive biometric surveillance practices.

    One of the few safeguards that had apparently survived the negotiations, a restriction on the use of retrospective facial recognition (#reconnaissance_faciale) [as opposed to real-time use], has since been gutted in subsequent so-called "technical" discussions held in recent weeks.

    Despite the promises of the Spanish representatives in charge of the negotiations, who swore that nothing substantial would change after 8 December, this watering-down of the protections against retrospective facial recognition is a fresh disappointment in our fight against the surveillance society (#société_de_surveillance).

    What is in the agreement?

    From what we have been able to see of the final text, the AI Act is a missed opportunity to protect civil liberties (#libertés_publiques). Our rights to take part in a protest (#manifestation), to access reproductive healthcare (#santé_reproductive), or even to sit on a bench (#banc) could be threatened by ubiquitous biometric surveillance of public space (#espace_public). The restrictions on the use of real-time and retrospective facial recognition provided for in the AI Act appear minimal and will apply neither to private companies nor to administrative authorities.

    We are also disappointed to see that, when it comes to "emotion recognition" (#reconnaissance_des_émotions) and biometric categorisation (#catégorisation_biométrique) practices, only very limited use cases are banned in the final text, with enormous loopholes.

    This means the AI Act will authorise many forms of emotion recognition, such as police use of AI systems to assess who is or is not telling the truth (#vérité), even though these systems have no credible scientific basis. If adopted in this form, the AI Act will legitimise a practice that, throughout history, has been bound up with eugenics (#eugénisme).

    The final text also allows the police to classify people filmed by CCTV cameras (#vidéosurveillance) according to their skin colour (#couleur_de_peau). It is hard to understand how this can be permitted, given that European law normally prohibits any form of #discrimination. It seems, however, that when it is carried out by a machine, legislators consider such #discriminations acceptable.

    Only one positive thing had emerged from the technical work that followed the final negotiations in December: the agreement was meant to limit retrospective facial recognition in public to cases involving the prosecution of serious cross-border crimes. Although the Reclaim Your Face campaign had called for even stricter rules on this, it was a significant improvement on the current situation, in which EU member states make massive use of these practices.

    It was a victory for the European Parliament, in a context in which so much latitude is being granted to biometric surveillance. Yet the negotiations of the past few days, under pressure from member state governments, led Parliament to agree to drop this limitation to serious cross-border crimes (#crimes_transfrontaliers) while weakening the remaining safeguards. From now on, a vague link to the "threat" (#menace) of a crime could be enough to justify the use of retrospective facial recognition (#reconnaissance_faciale_rétrospective) in public spaces.

    It appears that #France led the offensive to steamroll our right to be protected against the abuse of our biometric data. With the Olympic (#Jeux_olympiques) and Paralympic Games to be held in Paris this summer, France fought to preserve or extend the state's powers to eradicate our anonymity in public spaces and to use opaque, unreliable artificial intelligence systems to try to work out what we are thinking. The governments of the other member states and Parliament's lead negotiators failed to push back against it.

    Under the AI Act, we will therefore all be guilty by default and placed under algorithmic surveillance (#surveillance_algorithmique), the EU having given mass biometric surveillance its blessing. EU countries will thus have carte blanche to step up the surveillance of our faces and bodies, setting a chilling global precedent.

    https://www.laquadrature.net/2024/01/19/le-reglement-europeen-sur-lia-ninterdira-pas-la-surveillance-biometriq
    #surveillance_de_masse #surveillance #intelligence_artificielle #AI #IA #algorithme

    see also:
    https://seenthis.net/messages/1037288

  • Chile searches for those missing from Pinochet dictatorship with the help of artificial intelligence

    At the end of August, Chilean president Gabriel Boric launched the Search Plan for more than 1,000 Chileans. Today, old judicial documents, many typewritten, have been digitized to apply cutting edge technology and cross-reference data.

    On Monday 15 January, at the inauguration of the “Congress of the Future” in Santiago, President Gabriel Boric stated that artificial intelligence, the theme of the 13th version of the conference, “will play an important role in the search for our #missing detainees.” He was referring to the #Search_Plan to find over 1,000 individuals who were victims of the Augusto Pinochet dictatorship (1973-1990), which his Administration presented on August 30, 2023, on the eve of the September 11 commemoration of the 50th anniversary of the coup d’état that ousted Salvador Allende, the socialist president.

    The plan, spearheaded by Justice Minister Luis Cordero, is an initiative that is intended to become a permanent State policy. According to Justice data, after the dictatorship in Chile there were 1,469 victims of forced disappearance and of these, 1,092 are missing detainees, while 377, who were executed, are missing as well. So far only 307 have been identified.

    To embark on this new search, which has already been initiated by the courts, Cordero tells EL PAÍS that he is working with two main sources. On the one hand, the judicial investigations, which comprise millions of pages. And on the other, the administrative records of the cases that are scattered around state agencies. These include the Human Rights Program, created in 1997, which falls under the Ministry of Justice, as well as previous investigations in military Prosecutor’s Offices (which used to close the cases) and the files that provided the basis for the 1991 National Commission for Truth and Reconciliation Report, driven by the former president, Patricio Aylwin (1990-1994), and in which an account of the victims was given for the first time.

    Typewritten documents

    Unsholster, a company specialized in data analysis, data science and software development, whose general manager is the civil engineer Antonio Díaz-Araujo, is behind the technological analysis of the information. The Human Rights Program has already digitized its information, while the Judicial Branch is 80% digitized. The firm was awarded the project in a bidding process in the context of the Search Plan — it is in charge of the implementation of artificial intelligence.

    Something of relevance in this investigation is that the judicial files, separated according to each case, were processed in the old Chilean justice system (changed in 2005), which implies that the judges’ inquiries are on paper — most of them have the pages sewn into a notebook by hand, written on typewriters, and there are even several handwritten parts. These are the ones containing statements, black and white photographs, photocopies of photos, forensic reports and old police reports.

    However, in addition, the judicial inquiries that have been undertaken since 2000 will provide a more up-to-date and crucial basis of information in the analysis. Since then, hundreds of cases that had been shelved during the dictatorship have been reopened by judges with exclusive dedication to cases of human rights violations with sentences.

    Cordero points out that “there is a lot of information in the hands of the State and there is no human capacity to process it, because it needs to be interconnected. For example, there are testimonies that appear in some files and not in others. And, in addition, depending on the judges, there were lines of investigation, so there may have been precedents that were useful for some and not for others.” For this reason, the justice minister says artificial intelligence can play a key role, as he believes that in these cases, the cross-referencing of information will be crucial.

    “All that information is in judicial and administrative files, and what digitization accomplishes first is to integrate them in one place. And then to work with artificial intelligence, which allows us to reduce the investigation gaps using algorithms, which are being tested, and which can read, for example, dates, names, places, for instance, in those files,” the minister adds.

    4.7 million pages and counting

    Unsholster is currently in the pre-project stage, before it starts programming, Díaz-Araujo explains to EL PAÍS. “But we have already touched on most of the file types that we will need to deal with,” he says. The documents that have been coming in, scanned sheet by sheet, are in folders, in PDF format, and therefore do not correspond to a logic that allows data to be searched because they are recorded as images. For this reason, the first step has been to start applying OCR (Optical Character Recognition) technology so that they can be transformed into data.

    They already have information — which does not yet include the thousands of files of the Judicial Branch — totaling 46,768 PDF files, which amounts to more than 4.7 million pages. “If a person were to read every one of those pages, out loud and without understanding or relating facts, they would probably spend eight hours a day reading for 27 years,” explains the civil engineer.

    Once those files are moved to pages, Díaz-Araujo says, “a big classification tree is created, which allows you to classify pages that have images, manuscripts, typewritten pages, or Word-style files. And then you start to apply, on each one of them, the best OCR” for each type of page, because the key, he adds, lies in “what material is brought to each one.”
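
    The article does not say which OCR software Unsholster uses. Purely as an illustration of the pipeline described above (render each scanned page, route it by type, then run a suitable OCR pass), here is a minimal sketch using the open-source pdf2image and pytesseract libraries; both tool choices and the page-type logic are assumptions, not the firm's actual stack.

        # Illustrative sketch only: the article does not name Unsholster's tools.
        # pdf2image renders scanned PDF pages as images; pytesseract wraps the
        # open-source Tesseract OCR engine. Both are assumptions for this example.
        from pdf2image import convert_from_path
        import pytesseract

        def classify_page(image):
            """Stand-in for the 'classification tree' described above: a real
            system would distinguish typewritten pages, manuscripts, photographs,
            etc., and choose a different OCR configuration for each."""
            return "typewritten"

        def ocr_pdf(path):
            pages = convert_from_path(path, dpi=300)   # one PIL image per page
            records = []
            for number, image in enumerate(pages, start=1):
                page_type = classify_page(image)
                if page_type == "photograph":
                    text = ""                          # photos are indexed separately
                else:
                    text = pytesseract.image_to_string(image, lang="spa")  # Spanish OCR
                records.append({"page": number, "type": page_type, "text": text})
            return records

        # e.g. ocr_pdf("expediente.pdf") returns one searchable record per page,
        # ready to be indexed and cross-referenced with other files.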

    Another stage, he explains, is to create different types of dictionaries and entities “that can be learned with use.” For example, nicknames of people, places, streets (many have changed names since the dictatorship), ways of writing and dates.

    This implies, he says, creating a topology of entities in the reading, using technology, of each of the texts “that is capable of rapidly correlating different pages, people, places and dates in a highly flexible way.” He gives an example: “Many of the offenders may have nicknames, and several of them may be written in different ways, but that doesn’t mean that they won’t be linked. What you do is create technology that is capable of suggesting other correlations to the analyst as they occur over time.”
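
    As a rough sketch of the "dictionaries of entities" idea (nicknames, variant spellings, renamed places), the snippet below links raw mentions found on OCR'd pages back to canonical names. The alias table and the matching logic are invented for the example; they are not Unsholster's implementation.

        # Minimal alias-based entity linking across OCR'd pages (illustrative only).
        # The alias dictionary is invented for the example; a real system would
        # grow and refine these dictionaries as analysts confirm suggestions.

        ALIASES = {
            "manuel contreras": {"el mamo", "contreras sepúlveda"},
            "miguel krassnoff": {"krasnoff", "miguel krassnoff martchenko"},
        }

        def canonical_name(mention):
            """Map a raw mention (possibly a nickname or variant spelling)
            to a canonical entity name, or None if it is unknown."""
            m = mention.strip().lower()
            for canonical, variants in ALIASES.items():
                if m == canonical or m in variants:
                    return canonical
            return None

        def link_mentions(pages):
            """Return, for each canonical entity, the pages where it appears.
            `pages` is a list of dicts like {"page": 12, "mentions": ["El Mamo"]},
            with mentions produced by a prior OCR / name-spotting step."""
            index = {}
            for page in pages:
                for mention in page["mentions"]:
                    entity = canonical_name(mention)
                    if entity is not None:
                        index.setdefault(entity, []).append(page["page"])
            return index

        # e.g. link_mentions([{"page": 12, "mentions": ["El Mamo"]}])
        # -> {"manuel contreras": [12]}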

    Therefore, he elaborates, “there is artificial intelligence in the classification of documents; there is high intelligence in transforming documents from an image to searchable data and then, there is a lot of it, in the creation of entities that enable the connection of some documents with others. And, finally, the most necessary thing in a platform is that it should be about the possibility of competing algorithms, with artificial intelligence or without, on this data. But it should not be bound to a technology, because the biggest issue is being open to new technologies of the future. If you keep it closed, it becomes a stumbling block.”

    He continues: “Another key point of this platform is that the original data, and the transformed data, are retained. But you can continue to create other data on top of that. There is no time machine that kind of freezes the ability to produce more algorithms and more information with new platforms in the future.”

    Contreras and Krassnoff

    Five months after Unsholster first applied its technology to the nearly 47,000 documents of the Human Rights Program, it is already possible, thanks to the initial OCR applied to those documents, to find thousands of mentions of at least four military officers who were part of Pinochet’s secret police, the feared DINA (National Intelligence Directorate).

    Manuel Contreras, its director general, who by the time of his death in 2015 had been sentenced to 526 years in prison for hundreds of crimes, appears 2,800 times; Pedro Espinoza and Miguel Krassnoff, both serving sentences in Punta Peuco prison, receive 2,079 and 2,954 mentions, respectively. And Marcelo Moren Brito, the torturer of Ángela Jeria, mother of the former socialist president Michelle Bachelet, appears 2,284 times.

    For now they are only mentions. But from now on, names, facts, dates and places can be linked and related, says Díaz-Araujo.

    https://english.elpais.com/international/2024-01-18/chile-searches-for-those-missing-from-pinochet-dictatorship-with-the

    #Chili #intelligence_artificielle #identification #fosses_communes #dictature #AI #IA

  • The #CSNP calls for better anticipation of the impact of #IA (AI) on society
    https://www.banquedesterritoires.fr/la-csnp-invite-mieux-anticiper-limpact-de-lia-sur-la-societe

    After a first opinion in 2020 on artificial intelligence (#intelligence_artificielle), focused on its economic dimension, the Commission supérieure du numérique et des postes (CSNP) returns to the subject in its opinion (https://csnp.fr/wp-content/uploads/2024/01/AVIS-N%C2%B02024-01-du-17-JANVIER-2024-pour-mieux-encadrer-lusage-de-lintellige) no. 2024-01 of 17 January 2024. The parliamentarians focus in particular on the societal stakes of AI, now that the tsunami of generative AI (#IA_générative) has swept through. Signed by Mireille Clapot, MP for the Drôme and chair of the CSNP, the opinion stresses the urgency of regulating AI while calling on public authorities to cushion its societal effects and to help bring about a frugal AI.

    #régulation #formation #administrations_publiques