
Speaker: When using AI, keep ethics in mind

Regardless of how a firm ultimately implements AI, Kim Petro, a practice advancement coach from Woodard, stressed the importance of doing so ethically. While AI has opened up a whole new world of possibilities, not all of them are necessarily positive, and so it is important to be mindful of the consequences of using AI solutions. 

She pointed to biased hiring algorithms that disadvantage certain people without the developers even realizing it, “so if someone turns in a resume with the wrong name and the algorithm is scrubbing that for appropriateness to the job, it can completely disregard it when it really should not be, there could be discrimination.” 

She said there is also evidence people are using AI to create financial advice without disclosing that it came from AI, which “oh my god is so dangerous, the liability we bring on ourselves by giving bad advice and not disclosing where we got it.”

Students, as is well known, are also using AI to cheat on their schoolwork, she said. Meanwhile, she added, bots are trolling social media for purposes ranging from the ridiculous to the nefarious. 

During her own talk at Scaling New Heights in Orlando, Petro outlined some key considerations to avoid some of the less ethical applications of AI. 

For one, people should be transparent about their AI use and always disclose it. Petro herself disclosed that she used AI for research and for building course content: “I need to tell people it’s not my own content, it’s scrubbing the Internet and grabbing bits and pieces from other things. I also need to use my own voice when using this content.” 

Ethical users also verify what their AI tells them, she said, given the propensity of certain language models to make things up wholesale. They should also be aware that not only can the model be wrong, it can also carry bias, as in the aforementioned case of the hiring algorithm. 

She also said users should respect intellectual property, pointing to an example where board game designers used AI to make art instead of hiring artists, while the artists whose work the model drew from received no attribution. 

Finally, she said that people need to be aware of the privacy and security risks of using AI, particularly free accounts on public models. This is because inputs can go straight to the company’s servers, including any personal or financial information that generally needs to be kept private.

“We don’t want our confidential or sensitive information in there because if you use a free account it informs the generative model for everyone. Not good. Even if you use it just for financials, we highly recommend using a paid account so it is not informing the model,” she said. 

While many use AI to draft reports for clients, she said that if there is a risk the information will wind up with an unauthorized third party, users should scrub all identifying details from the prompt before entering it into the model, and then restore those details in the final report itself. 
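That scrub-then-restore workflow can be sketched in a few lines. This is a minimal illustration, not a tool Petro named; the client name and EIN below are entirely hypothetical, and a real implementation would want pattern-based detection (e.g. regexes for tax IDs) rather than a hand-built mapping:

```python
def redact(text: str, mapping: dict[str, str]) -> str:
    """Swap each identifying detail for a neutral placeholder
    before the text is sent to an external model."""
    for real, placeholder in mapping.items():
        text = text.replace(real, placeholder)
    return text

def restore(text: str, mapping: dict[str, str]) -> str:
    """Put the real details back into the model's draft
    so the final client report is complete."""
    for real, placeholder in mapping.items():
        text = text.replace(placeholder, real)
    return text

# Hypothetical client details -- illustrative only.
mapping = {"Acme Plumbing LLC": "[CLIENT]", "47-1234567": "[EIN]"}

prompt = redact("Draft a summary for Acme Plumbing LLC (EIN 47-1234567).", mapping)
# The prompt sent to the model now reads:
# "Draft a summary for [CLIENT] (EIN [EIN])."

draft = "[CLIENT] showed steady revenue growth this quarter."  # simulated model output
report = restore(draft, mapping)
# The restored report names the client again.
```

The key design point is that the mapping never leaves the firm's own machine; only the placeholder version of the text reaches the model.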

Other ethical AI uses she suggested include brainstorming, “not content replacement but to help us do the research,” as well as approved image generation for marketing purposes using appropriately licensed artwork, such as “you have a great logo and want to make a banner for LinkedIn.”

Professionally, there is also internal process generation and automation, basically “we can have ChatGPT write me a process for automating monthly close or bank recs, it doesn’t matter as long as we use it internally and, again, has human review. I cannot stress that enough.” 

The specific issues that a firm might face regarding AI ethics can vary greatly, and so she also stressed the importance of creating best practices and acceptable use policies, such as making sure a human reviews everything, only using tools that create audit trails, or assigning permissions with user roles. 

Overall, she said users should remember to: 

  • “Ask yourself, am I representing this content as my own? If you are, may God strike you down. Well, just kidding. But maybe think about it and really write that this was not your work, it was someone else’s.” 
  • “Could this harm instead of help? If I’m a board game designer, say I’ll save some money and use AI to create this graphic design. I won’t pay an artist to do that. So I scrub the Internet using AI and pull from different artists and it’s blatantly obvious and… [the artist] has nothing to protect their work.” 
  • “Would I be comfortable explaining how I use this to my client? You may have clients who’re not tech savvy and you tell them you put something in AI and they say ‘you put all my information to all robots everywhere!’ They freak out. Would they be okay with you using AI? Are you telling them upfront you use AI but, hey, we won’t have your personal information out there, are you okay with that?”
  • “Normalize conversations around ethical technology use. This is a big thing, especially with policies and enforcing them. AI is changing every day, evolving fast, so fast that we have to keep up with the conversations and learning and make sure we stay on top of it.”  
  • “Above all, lead with integrity and curiosity.”

