Google is Tackling Misinformation Online
Google is undoubtedly the most popular search engine, returning millions of results in seconds from just a few vague words. But one of its weaknesses is that whilst information is fast moving and ever changing, reliable search results with accurate information may not appear right away.
The spread of misinformation online has been going on for years, but with the Covid-19 pandemic and the 2020 US elections, false news and misinformation can prove dangerous, leading many to doubt the real facts or misunderstand how events impact the world around them. There are groups and pages run by people who believe everything on the news is wrong and invent their own stories about what they think is true. Because algorithms pick up on the shared phrases and hashtags these people use, their message gets across, and the number of people sharing this unverified information multiplies constantly. This makes it difficult for social media companies to track down the sources, and it lets users post and search for whatever they want to hear.
Because reliable results can lag behind breaking news, people end up reading headlines and articles on topics that haven’t yet been verified or debunked. Within the last week, the tech goliath has been testing a feature on its US search engine that displays prompts saying "it looks like these results are changing quickly" and "if this topic is new, it can sometimes take time for results to be added by reliable sources". This should help limit the spread of misinformation when popular news topics break and when more specific search queries are gaining interest fast, although Google has not yet announced what it considers to be reliable sources.
“It’s a great way of making people pause before they act on or spread information further,” said Evelyn Douek, a researcher at Harvard who studies online speech. “It doesn’t involve anyone making judgments about the truth or falsity of any story but just gives the readers more context. … In almost all breaking news contexts, the first stories are not the complete ones, and so it’s good to remind people of that.”
Search engines and social media sites have been trying to clamp down on how far conspiracy theories and misinformation spread, but they don’t want to constantly remove people’s profiles and content, as they’re not keen on being seen to take away free speech. Still, during recent events where false news spread widely, such as the 2020 elections and the pandemic, companies found themselves shutting down popular accounts that were sharing unverified and completely false information. The issue is that those people will simply make more accounts and more content, and will point to the media giants taking down their posts as further proof that they are being stopped from spreading what they call “the truth”. The new prompts Google is testing therefore aim to make people stop, think, and perhaps research a little further into what is actually true, and hopefully curb the volume of unverified information.
Facebook is one of the main platforms where people post and share false news. It has been taking steps to counteract this: as many of you may have seen, posts about the pandemic, and even some conspiracy theory posts, now carry warnings about misinformation, stating that independent fact-checkers have deemed them unverified and potentially harmful to users taking them in. Many of the groups and individuals sharing these kinds of stories are at least partly financially motivated, using dodgy links or ads to get people to sign up for or purchase something, which is another issue in its own right. Facebook’s measures include third-party fact-checkers to help limit the spread, stricter enforcement of its policies to make it harder for bad actors to buy ads on the platform, and improved detection of fake and spam accounts.
Along with Facebook, other social media companies have been making an effort to stamp out misinformation. Twitter released new features ahead of the 2020 elections prompting users that circulating information was not yet verified. Google’s new feature, still in beta testing, also builds on its recent efforts around ‘search literacy’ - essentially helping users understand the context of their queries and how to improve them for better search results. In April of last year, Google released a feature letting users know when there aren’t enough decent matches for their search, and in February of last year it rolled out an ‘about’ button next to search results that shows a brief Wikipedia description of the site being viewed, available for a small variety of sites at the time.
“When anybody does a search on Google, we’re trying to show you the most relevant, reliable information we can,” said Danny Sullivan, a public liaison for Google Search. “But we get a lot of things that are entirely new.”
The new feature is very much in testing at the moment, but once it has been rolled out across more search results, and of course in other countries too, it could prove extremely useful in slowing the rampant spread of false news and misinformation. All too often, people who have already made up their minds about a story will more than likely ignore warnings. We will have to see how Google updates and develops this new feature.
Keep up to date with the latest tech industry insights, trends, information technologies, app development, and small business content with the Proteams Blog
Follow us on LinkedIn for updates on the latest tech news here