Keeping AI in Check: Why Human Rights Must Anchor AI Regulation in 2024

In 2023, things got pretty wild in the tech world with all the buzz about AI. Policy folks dove deep into talks about how to keep artificial intelligence (AI) in check. It all kicked off when ChatGPT showed up in late 2022, and the year wrapped up with a political agreement on the EU AI Act. The final details are still being tweaked, but the Western world may be getting its first-ever "AI rulebook." It's meant to protect people from AI harms, but it's not hitting the bullseye in some key areas, especially when it comes to safeguarding the rights of the most marginalized folks.


Also in November 2023, the UK Government threw a shindig called the AI Safety Summit. Bigwigs from around the globe, major industry players, and a few civil society groups got together to chat about the risks of AI. It's cool that people are finally talking about how to keep AI in check, but the big question for 2024 is whether all this talk will turn into action.


Sure, AI has its perks, but we can't ignore the downsides. When AI tools are turned against society, enabling mass surveillance or discrimination, it's trouble. These AI systems are often trained on mountains of private and public data, which is a recipe for biased outcomes that make existing inequalities worse. From predictive policing to deciding who gets healthcare or social support, to tracking migrants and refugees, AI keeps stepping on the rights of the folks who are already struggling. And let's not forget fraud detection algorithms causing financial chaos for ethnic minorities, or facial recognition tech being used to target specific communities and prop up unjust systems.


Now, why's it so hard to regulate AI? First off, the term "AI" is pretty vague. It covers a bunch of different technologies and applications, making it tough to nail down a single definition. AI shows up in lots of places, both public and private, so there are loads of different people involved in making and using it. These systems aren't just hardware or software; their impact depends on where and how they're developed and used. Regulating this stuff is like trying to grab a handful of slippery eels.


As we roll into 2024, it's not just about making sure AI is designed with rights in mind. It's also about making sure the people affected by these technologies have a real say in how things are done. The EU, UK, US, and others are laying out their plans for dealing with the risks of AI. But no matter how complicated the laws get, the focus should always be on protecting people from AI mess-ups now and in the future.


At Amnesty, we're clear about what any rules for AI should include. They need to be legally binding and address the harms people are already facing from these systems. The fancy-sounding "responsible" development of AI, which the UK is all about, isn't enough; it needs to be written into law. Plus, any rules should go beyond just checking the technical nuts and bolts. We need bigger checks and balances to make sure these systems don't trample on human rights. Banning certain AI systems that violate human rights should always be on the table, no matter how fancy they claim to be.


Lawmakers everywhere, the EU included, need to close up any loopholes that let companies dodge the rules. Getting rid of exemptions for AI in national security or law enforcement is a must. And if one place says "no" to certain AI systems, there shouldn't be sneaky ways for those systems to end up causing harm in other countries. This is a big problem with the UK, US, and EU plans: they're not thinking about how these tech imbalances might hurt people in the Global Majority. There are already cases of companies using AI tools in Kenya and Pakistan and not playing fair with the workers there.


So, here's the deal as we step into 2024: AI needs to play by the rules and respect human rights. And the folks getting hit by these technologies need a real seat at the decision-making table. We're not just looking for talk from lawmakers; we need solid rules that hold companies and big players in check. Global talks are cool, but we can't forget the importance of solid rules on the national level. This is where real accountability kicks in, making sure that when AI messes up, the folks affected can seek justice.
