Your website can now opt out of training Google’s Bard and future AIs

Large language models are trained on all kinds of data, much of which, it seems, was collected without anyone’s knowledge or consent. Now you have a choice whether to allow your web content to be used by Google as material to feed its Bard AI and any future models it decides to make.

It’s as simple as disallowing the user agent “Google-Extended” in your site’s robots.txt, the document that tells automated web crawlers what content they’re able to access.
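In practice, that means adding a short rule to the robots.txt file at your site’s root. A minimal sketch (here `Disallow: /` opts the entire site out; a narrower path would opt out only that section):

```
# Block Google's AI-training crawler token while leaving
# normal search indexing (Googlebot) untouched.
User-agent: Google-Extended
Disallow: /
```

Note that Google-Extended is a control token rather than a separate crawler, so regular Googlebot visits and search indexing are unaffected by this rule.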

Though Google claims to develop its AI in an ethical, inclusive way, the use case of AI training is meaningfully different from indexing the web.

“We’ve also heard from web publishers that they want greater choice and control over how their content is used for emerging generative AI use cases,” the company’s VP of Trust, Danielle Romain, writes in a blog post, as if this came as a surprise.

Interestingly, the word “train” does not appear in the post, although that is very clearly what this data is used for: as raw material to train machine learning models.

Instead, the VP of Trust asks you whether you really don’t want to “help improve Bard and Vertex AI generative APIs” — “to help these AI models become more accurate and capable over time.”

See, it’s not about Google taking something from you. It’s about whether you’re willing to help.

On one hand that is perhaps the best way to present this question, since consent is an important part of this equation and a positive choice to contribute is exactly what Google should be asking for. On the other, the fact that Bard and its other models have already been trained on truly enormous amounts of data culled from users without their consent robs this framing of any authenticity.

The inescapable truth borne out by Google’s actions is that it exploited unfettered access to the web’s data, got what it needed, and is now asking permission after the fact so it can look as though consent and ethical data collection are priorities. If they were, we would have had this setting years ago.

Coincidentally, Medium just announced today that it would be blocking crawlers like this universally until there’s a better, more granular solution. And they aren’t the only ones by a long shot.

source

Rinsu Ann Easo
Diligent Technical Lead with 9 years of experience in software development. Has successfully led project management teams to build technological products, with exposure to the full software development life cycle, including requirement analysis, program design, development, unit testing, and application maintenance. Has worked with Java, PHP, PL/SQL, Oracle Forms and Reports, Oracle, Bootstrap, Struts, jQuery, Ajax, JavaScript, CSS, C++, and Microsoft Office (Excel and Word).
