The furore over the data that Cambridge Analytica, a data analysis firm, acquired from Facebook – something in the order of 30 to 50 million people’s details, including their profiles and Likes – has accelerated in a remarkable way. On Saturday the Observer and the New York Times published new revelations about how it used that data in the US election to support Donald Trump’s campaign; by Monday evening, the UK’s Information Commissioner was seeking a warrant to search CA’s computers to find out what personal information it might hold, and whether it was authorised to have it.

The data grab happened in 2014. A team of Cambridge academics had discovered that from just a few of your “Likes” they could infer other elements of your personality – including your likely political persuasion. CA set up a subsidiary with one of the academics, Aleksandr Kogan, who built a personality quiz, promoted it to American users in particular, and seeded it among people on Amazon’s paid “Mechanical Turk” system. The sneaky element: the quiz asked for access to your data (Likes, relationship status, education, interests and anything else that was “public” in your profile) – and to that of your friends. About 270,000 people took it; with each person having an average of 150-200 “friends” on Facebook (where the word has less heft than in the physical world), you quickly reach a dataset comprising 30 to 50 million people.
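For the sceptical, the arithmetic checks out. Here’s a back-of-envelope sketch in Python using the figures above; the point about overlapping friend lists is an assumption on my part, but it explains why the unique total lands below the raw one:

```python
# Back-of-envelope check of the harvest's scale, using the article's figures.
takers = 270_000                    # people who took the quiz
for friends_each in (150, 200):     # average Facebook friend counts cited
    raw_profiles = takers * friends_each
    print(f"{friends_each} friends each -> {raw_profiles / 1e6:.1f}m raw profiles")
# Prints 40.5m and 54.0m. Friend lists overlap, so the unique total is
# lower -- consistent with the 30 to 50 million range cited above.
```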

Given that user data, CA was also able to exploit Facebook’s “open graph” system of the time, which would let you find out even more about someone’s Facebook activity, if you had enough detail to ask its computers the correct question. (You couldn’t just ask for “all the details about anyone living in Texas”, but you could ask for details about anyone called Fred Schmoe who was interested in vintage cars and lived in Littletown, Texas.) CA duly sucked up all this data. Facebook, alerted by concerned outsiders, eventually closed off that access in 2015.
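To make that concrete, here is a minimal sketch of how such a narrowing query might have looked. The /search endpoint shown existed in the since-retired v1.0 of Facebook’s Graph API; the token, the specific fields and the filtering step are illustrative assumptions, and none of this works today:

```python
# Sketch only: the kind of narrowing lookup the pre-2015 Graph API allowed.
import requests

ACCESS_TOKEN = "..."  # placeholder; v1.0 accepted ordinary user tokens

# Step 1: you couldn't ask for "everyone in Texas", but you could search by name.
resp = requests.get(
    "https://graph.facebook.com/search",
    params={"q": "Fred Schmoe", "type": "user", "access_token": ACCESS_TOKEN},
)
candidates = resp.json().get("data", [])

# Step 2: narrow the candidates using details you already hold
# (hometown, interests), fetching each profile in turn.
for person in candidates:
    profile = requests.get(
        f"https://graph.facebook.com/{person['id']}",
        params={"access_token": ACCESS_TOKEN},
    ).json()
    if profile.get("hometown", {}).get("name") == "Littletown, Texas":
        print(profile["id"], profile.get("name"))
```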

The data was only ever meant for academic use. But it crossed over, and Cambridge Analytica had it available to target voters in the US. (This is why it’s tricky to call it a “data breach”. Facebook’s systems were working exactly as the company intended. But Facebook alleges that Cambridge Analytica used the data in ways that it should not have. A good lawyer could certainly argue that that constitutes a “breach”.)

This is where the debate lies. How deeply was Cambridge Analytica involved in the US election? And did the data it held make any difference to the result? Ex-Facebookers insist that for all the firm’s claims about “psychographic targeting”, there was no effect. The most persistent voice in this argument is Antonio Garcia Martinez, who wrote trading software at Goldman Sachs, left to set up an ad-tech startup in San Francisco, and was then hired by Facebook, where he headed the team trying to make its ad targeting work. (His memoir, Chaos Monkeys, is terrific.)

But Martinez is dubious – and says those he’s spoken to in the industry are too. “Most ad insiders express skepticism about Cambridge Analytica’s claims of having influenced the election, and stress the real-world difficulty of changing anyone’s mind about anything with mere Facebook ads, least of all deeply ingrained political views,” he wrote on Monday for Wired. There are too many assumptions, he suggests: first, that having found the typical profile of Trump voters, you could work backward from people’s profiles to figure out whether they might vote for Trump; and second, that you could devise the right ads to change those people’s minds.

But there’s some sleight of hand going on here. Nobody is suggesting that you can change deeply ingrained political views with a few Facebook ads. It’s the shallow-rooted ones that are at risk. There’s also the “extremism effect”: as Professor Cass Sunstein of Harvard University has shown, views in any online community tend towards extremes through a combination of confirmation bias and peer pressure. Given that the US election was decided by a difference of just 77,000 votes out of 13.2 million cast in the three key states of Michigan, Pennsylvania and Wisconsin (with a million-vote swing across those states: 600,000 fewer for Clinton than Obama, 400,000 more for Trump than Romney), the question isn’t how big an effect this data and the messaging around it had. It’s how little would be needed to make a difference.
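To put that margin in perspective, a quick calculation from the numbers above (the “one voter in 170” framing is mine):

```python
# How little would be needed: the decisive margin as a share of votes cast.
decisive_margin = 77_000      # combined Trump margin in MI, PA and WI
votes_cast = 13_200_000       # total votes cast across those three states
print(f"{decisive_margin / votes_cast:.2%}")   # -> 0.58%
# Nudging roughly one voter in 170 across three states would have been
# enough to change the outcome -- the bar any "effect" has to clear.
```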

Meanwhile, Google is grappling with its own version of this “extremism” trend: YouTube stands accused of steering people who start with reasonable videos towards ever more extreme content on the same subjects, because that is what captures their attention. “Truth” doesn’t enter into it; notice that Google’s mission statement – to “organise the world’s information and make it universally accessible and useful” – says nothing about separating accuracy from untruth.

All these revelations have triggered a splurge of stories explaining “how to stop Facebook seeing your data” and “how to delete your Facebook account”. But that attacks the problem from the wrong direction. Facebook has become essential for many people as a means to find new contacts, even to run businesses; it’s as important as having a presence on the wider web, and thus being findable by Google, but with a more personal touch. Like it or not, Facebook underpins a lot of important communication between businesses and customers. Google, too, is most people’s de facto search tool; if something isn’t found there, it may as well not exist.

What we need isn’t to delete our accounts; it’s for these companies to rein in their rapacious demands for our data. As the Cambridge Analytica example shows, once our data leaves our devices, its future journey and application are essentially unknowable – and you probably won’t like them, unless you truly adore “targeted ads” that dog your web browsing, offering variants of the sofa you just looked at on a shopping site, or peculiar semi-political ads that try to coax you into Liking them.

We’re heading towards a watershed: the realisation that we’re giving the web giants such as Google and Facebook far more than they’re giving us. They offer us services; they take our attention and privacy. We aren’t allowed to argue over the terms, unless we just don’t cooperate. (That’s why I don’t use Google for searching, and barely ever visit Facebook.)

Now the two giants are being called to account. But their businesses are built around getting hold of our data and trying to monetise it. Facebook’s bruising experience might persuade it to take a different approach. The problem, always, is that the mistakes happened in the past – so executives can insist that they’ve learnt the lessons and made the changes, and then do nothing.

What’s different this time is the ire being expressed by legislators on both sides of the Atlantic. Facebook might not want to change. But it could be forced to. And that’s the only way that it will change.

Charles Arthur is author of Cyber Wars: Hacks that shocked the business world, to be published in May. He was formerly technology editor of the Guardian.