Interesting news for content creators

... of information for content marketers

I’m trying something different today for you little content goblins. I’m going to give you a round-up of some news articles that I found interesting over the last few days.

If you’re a content creator, SEO, or marketer, you should check out these articles.

Judge to Google: You’re a monopoly!

That’s right, a judge has ruled that Google has an illegal monopoly on internet search and advertising.

… but here’s something I found even more interesting. The Verge has been covering this closely, and one anecdote in particular stood out:

My crazy theory: Google might purposely make search worse in order to sell more search ads.

In response, Google’s PR team shared a completely different interpretation on X. The spin machine is in high gear at the former “don’t be evil” company!

Is ChatGPT Easy To Identify as AI Content?

Programmatic SEO legend Ian Nuttall posted something interesting on LinkedIn. When you compare ChatGPT’s output to Claude’s, ChatGPT’s seems much easier to identify as AI-written.

Do you think one of these sounds more like it was written by AI than the other?
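Ian’s comparison was an eyeball test, but just for fun, here’s a rough way to put a number on it. This is a minimal sketch of my own, nothing from Ian’s post: the phrase list, the samples, and the scoring are all assumptions, not a real AI detector.

```python
import re

# Hand-picked phrases that ChatGPT-style output tends to overuse.
# This list is an assumption for illustration, not a validated method.
AI_TELL_PHRASES = [
    "delve into",
    "in today's fast-paced world",
    "it's important to note",
    "in conclusion",
    "unlock the power",
    "game-changer",
    "in the realm of",
]

def ai_tell_score(text: str) -> float:
    """Return tell-phrase hits per 100 words (a crude heuristic)."""
    lowered = text.lower()
    hits = sum(lowered.count(phrase) for phrase in AI_TELL_PHRASES)
    words = len(re.findall(r"\w+", text)) or 1
    return 100 * hits / words

# Two toy samples standing in for the outputs being compared.
sample_a = ("In today's fast-paced world, it's important to note that "
            "content marketers must delve into new strategies.")
sample_b = "Google lost. The judge called it a monopoly. That's the story."

for name, sample in (("A", sample_a), ("B", sample_b)):
    print(f"Sample {name}: {ai_tell_score(sample):.1f} tells per 100 words")
```

Real detectors lean on things like perplexity and trained classifiers rather than phrase lists, so treat this strictly as a toy.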

First Major Law Regulating AI Gets Passed in the EU

The AI Act is the first major law that aims to govern the way companies develop, use, and apply AI. It was given final approval by EU member states, lawmakers, and the European Commission.

It’s largely aimed at the big US tech companies driving most of the current innovation in AI.

One area they are focusing on is what they consider “High Risk” applications of AI.

Examples of high-risk AI systems include autonomous vehicles, medical devices, loan decisioning systems, educational scoring, and remote biometric identification systems.

The law also imposes a blanket ban on any applications of AI deemed “unacceptable” in terms of their risk level.

Unacceptable-risk AI applications include “social scoring” systems that rank citizens based on aggregation and analysis of their data, predictive policing, and the use of emotion recognition technology in the workplace or schools.

I’m not typically a fan of regulations, but some rules governing the things above don’t sound that unreasonable.

The law includes some exceptions for open-source models, but those models must fully open-source their weights and architecture. I think Meta’s Llama models would meet this requirement.

Wrapping up

Is it really a joke if it’s true? Don’t @ me though…