
Are there generative AI tools I can use that are perhaps slightly more ethical than others?
–Better Choices
No, I don’t think any one generative AI tool from the major players is more ethical than any other. Here’s why.
For me, the ethics of generative AI use comes down to how the models are developed (specifically, how the data used to train them was accessed) as well as ongoing concerns about their environmental impact. Powering a chatbot or image generator requires an obscene amount of data, and the decisions developers have made in the past, and continue to make, to obtain this repository of data are questionable and shrouded in secrecy. Even the models that people in Silicon Valley call “open source” keep their training datasets hidden.
Despite complaints from authors, artists, filmmakers, YouTube creators, and even social media users who don’t want their posts scraped and turned into chatbot sludge, AI companies have typically behaved as if consent from those creators isn’t necessary for their output to be used as training data. One familiar claim from AI proponents is that obtaining this vast amount of data with the consent of the humans who crafted it would be too unwieldy and would impede innovation. Even for companies that have struck licensing deals with major publishers, that “clean” data is an infinitesimal part of the colossal machine.
Although some developers are working on approaches to compensate people fairly when their work is used to train AI models, these projects remain niche alternatives to the mainstream behemoths.
And then there are the ecological consequences. The environmental impact of generative AI is similarly outsized across the major options. While generative AI still represents a small slice of humanity’s aggregate stress on the environment, gen-AI tools require vastly more energy to create and run than their non-generative counterparts. Using a chatbot for research assistance contributes far more to the climate crisis than simply running a web search on Google.
It’s possible the amount of energy required to run the tools could be lowered (new approaches, like DeepSeek’s latest model, sip precious energy resources rather than chug them), but the big AI companies appear more interested in accelerating development than in pausing to consider approaches less harmful to the planet.
How do we make AI wiser and more ethical rather than smarter and more powerful?
–Galaxy Brain
Thank you for your wise question, fellow human. This predicament may be more of a common topic of discussion among those building generative AI tools than you might expect. For example, Anthropic’s “constitutional” approach to its Claude chatbot attempts to instill a sense of core values into the machine.
The confusion at the heart of your question traces back to how we talk about the software. Recently, multiple companies have released models focused on “reasoning” and “chain-of-thought” approaches to perform research. Describing what AI tools do with humanlike terms and phrases makes the line between human and machine unnecessarily hazy. I mean, if the model can truly reason and have chains of thoughts, why wouldn’t we be able to send the software down some path of self-enlightenment?
Because it doesn’t think. Words like reasoning, deep thought, understanding: those are all just ways to describe how the algorithm processes information. When I pause over the ethics of how these models are trained and their environmental impact, my stance isn’t based on an amalgamation of predictive text patterns, but rather on the sum of my individual experiences and closely held beliefs.
The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of the user’s prompts when interacting with a chatbot? What were the biases in the training data? How did the devs teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task at hand is cultivating more ethical development practices and user interactions.