

Opinions Matter: The AI Partnership

At Rockstart, we do our best to stay current on what is happening in the tech world. This post is part of a series where we present a topic currently in the news and share what different people from the team think about it.

Tech giants Amazon, Facebook, Google’s DeepMind division, IBM, and Microsoft have founded a new organization called the Partnership on Artificial Intelligence to Benefit People and Society. “One of the purposes of this group is really to explain and communicate the capabilities of AI, specifically the dangers and the basic ethical questions,” Facebook’s director of AI research, Yann LeCun, said during a press briefing. But this can prove to be difficult: even if such a large group can agree on something as controversial as ethical principles, there is currently no way to ensure those principles can and will be put into practice. (Quoted from a Wired article on the subject.)

To get a good idea of exactly what people think about the AI partnership, we asked people to read an article and comment on whether they think this partnership will benefit AI. The answers come from a variety of people with different backgrounds, different roles, and different levels of knowledge on the topic. Check out the answers below.

A benefit to the future and understanding of AI:

Arthur – “To facilitate a wider understanding of AI, and the possible outcomes of its further development, it’s important to open up the conversation. This means involving all stakeholders – i.e. the academics studying the field, the companies working on the actual technology, and the general public, who will increasingly be affected as development progresses. A large part of this dialogue also means addressing risks and being transparent about them. In my opinion, any endeavor that encourages knowledge-sharing between the major tech companies involved and informs the larger public about AI’s implications is a good thing.”

Jan – “These big tech companies have already accumulated many of the leading minds in the field. Collaboration among them, as well as with outside experts who have a more societal or ethical perspective, can only be beneficial in creating a positive trajectory for this technology.

AI’s actions will be based on the data that you feed it. If that data is “racist”, the AI will be as well (as in the case of the racist Microsoft chatbot). At the end of the day, the question will be whether we want AI to be a 100% reflection of human nature (with all its faults and biases) or a better version of ourselves.”

Great partnerships have a variety of partners:

Eric – “To be fair, I don’t know enough about the partnership to have a strong opinion. That said, traditionally speaking, technology has always outpaced our ability to handle its power ethically. I do like the fact that they have non-AI researchers on the panel, though, as techies often get lost in their own world and forget how it impacts the average person on the street.

AI is a very complex topic. It is hard to comment without being an academic.”

The fear of AI will be a focus:

Karin – “Even just educating the general public about AI will reduce the fear that naturally surrounds something people do not know; moving to a foreign country, for example, seems very scary until you get to know someone there.

There is an interesting video where Elon Musk talks about this topic. Also, WaitButWhy did some awesome articles on the subject of AI and whether it will mean our evolution or extinction. Part 1 can be found here, and part 2 can be found here.”

Carolina – “Given that the purpose of our society is development, I think that AI knowledge can only grow from this partnership. On the other hand, it’s hard to put boundaries on what is ethical and what is not. I think it’s really important to define the goal of this partnership in order to limit wrong decisions and, of course, to always be as clear and transparent as possible with the outside world, in order to address the common fear everybody has when it comes to AI.

If we look through history, every big change was scary at the beginning. That didn’t stop people from trying their best in order to progress. Of course mistakes happened, but I think that’s part of the development process: we can learn from our mistakes. AI is a big deal, but if we are able to involve as many people’s opinions in the process as possible, maybe we can better understand what to do and how to do it. Development has to look at the future while thinking in the present.”

There are still more questions:

Sarah – “To some degree, yes, because I think the first step is acknowledging there are problems in AI with privacy, ethics, and fear of the unknown (job security). But it’s hard to assess how this new organization will direct the dialogue on the future and its relationship with AI. What kind of rules and attitude will they adopt towards AI? How will they deal with people who use AI for wrongdoing? It’s all unknown, so I say yes with much hesitation.”

Catherine – “I think it’s better than no thinking at all, but I feel like the financial and power interests these companies have in AI could possibly outweigh their “ethical” positions, so it’s something to be wary of.”
