The government and regulators must step in and force internet platforms to prevent scams, dangerous products and fake reviews appearing on their sites, consumer group "Which?" has said.
The group said it was time to stop asking technology companies to make changes and instead introduce laws to better protect people.
Which? has published research in which 68 per cent of respondents said they had little or no trust that companies such as Amazon, eBay, Facebook and Google were taking effective steps to protect them from scams or fake products.
The research also found that 89 per cent of those asked said online customer reviews helped them to make decisions on buying products.
But only 6 per cent said they had a “great deal” of trust in online platforms taking meaningful steps to stop the spread of fake reviews, and 18 per cent said they did not trust the platforms to do so “at all”.
In response, the group has launched its #JustNotBuyingIt campaign, which is urging the government to make tech companies take responsibility for harm taking place on their sites.
The group believes current legislation does not place enough legal responsibility on platforms, allowing scammers and criminals to sell unsafe products and mislead people.
The group said there was not enough legal incentive to shut down these practices.
“Millions of consumers are being exposed every day to scams, dangerous products and fake reviews,” said Rocio Concha, Which? director of policy and advocacy.
“The world’s biggest tech companies have the ability to protect people from consumer harm but they are simply not taking enough responsibility.
“We are launching our new #JustNotBuyingIt campaign because it is time to stop just asking these platforms to do the right thing to protect consumers.
"Instead the government and regulators must now step in and make them take responsibility by putting the right regulations in place.”
In response to Which?, Amazon said it “strongly disagrees with these assertions, which misrepresent the facts”.
It said it had invested more than $700 million and employed more than 10,000 people to protect customers.
Amazon said it was “relentless” in its efforts and had built “robust programs and industry-leading tools” to ensure products were safe and reviews were genuine.
“Our powerful machine learning tools and skilled investigators analyse over 10 million review submissions weekly,” it said.
"And last year, our teams proactively blocked more than 10 billion suspect listings for various forms of abuse, including non-compliance, before they were published to our store."
eBay said it had a “long-standing commitment to ensuring consumers have the confidence to shop online safely”, and that it used automatic filters to block unsafe listings, which it said stopped “six million unsafe listings” in 2020.
It said it had established a “regulatory portal” that enables “authorities, such as Trading Standards, to directly report and remove listings that do not comply with relevant laws and regulations”.
A Facebook representative said the company was “dedicating significant resources to tackle the industry-wide issue of online scams”.
They said the company was working to detect scam advertisements, block advertisers and, in some cases, take legal action against them.
“While no enforcement is perfect, we continue to invest in new technologies and methods to protect people on our service from these scams,” Facebook said.
“We have also donated £3m [$4.1m] to Citizens Advice to deliver a UK Scam Action Programme to both raise awareness of online scams and help victims,” Facebook said.
A Google representative said “protecting consumers and legitimate businesses operating in the financial sector was a priority”, and that it had been working with the Financial Conduct Authority on implementing new measures.
“Having now launched further restrictions requiring financial services advertisers to be authorised by the FCA with carefully controlled exceptions, we will be vigorously enforcing our new policy.”
This week, Google and Facebook suggested that fraud does not fit within the draft Online Safety Bill being examined by MPs and peers, despite calls for it to be included.
The draft bill focuses on user-generated content, such as material relating to child exploitation and terrorism, but campaigners, including consumer champion Martin Lewis, say scams should also fall within its scope.
Google and Facebook told MPs they thought it would be a “challenge” to cover fraud in the new regulation because the techniques for user-generated content and scams were quite different.