
AI prompt engineering for accountants

As artificial intelligence becomes increasingly integrated into accounting workflows, the ability to write effective prompts has emerged as a critical skill. 

Whether we’re researching a company, drafting audit memos, or extracting insights from SEC filings, the quality of the AI output depends heavily on how well we communicate with these tools.

In my experience working with AI tools like ChatGPT, the difference between unreliable and useful results often comes down to the quality of the prompt. A high-quality prompt helps the AI understand my expectations and complete the task with precision. Below are some of the tips I’ve found helpful when writing prompts.

Write explicit and specific prompts with detailed instructions

I find it helpful to give the AI detailed, explicit instructions for the task. Otherwise, it may produce generic answers that are inconsistent with what I was expecting. For example, if I want ChatGPT to analyze financial statements, instead of prompting “analyze the financial statement,” I think about which areas I want it to focus on and write a detailed prompt. A better prompt might be:

“Help me analyze the attached financial statements.

Instructions:

  1. Calculate the current ratio, debt-to-equity ratio and return on equity for Q4 2020. 
  2. Compare these metrics to industry benchmarks for midsize retail companies provided in the attached PDF.
  3. Highlight any ratios that deviate by more than 15% from the industry average.” 

Unlike a human analyst, who might already be familiar with your work style, the AI has no idea what you’re expecting unless you spell it out. The key is to treat it as a capable but memoryless analyst: writing clear, detailed instructions helps it understand exactly what output you want.
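
If you prefer to automate recurring analyses from a script rather than the chat window, the same idea carries over. Below is a minimal sketch in Python that builds an explicit prompt from a few parameters; the function name, parameter names and values are purely illustrative, not part of any particular tool.

    # Build an explicit, reusable analysis prompt instead of a vague one-liner.
    # All names and values here are hypothetical placeholders.
    def build_ratio_analysis_prompt(metrics, period, benchmark_source, threshold_pct):
        steps = [
            f"Calculate {metrics} for {period}.",
            f"Compare these metrics to the industry benchmarks in {benchmark_source}.",
            f"Highlight any ratios that deviate by more than {threshold_pct}% from the industry average.",
        ]
        numbered = "\n".join(f"  {i}. {step}" for i, step in enumerate(steps, start=1))
        return "Help me analyze the attached financial statements.\n\nInstructions:\n" + numbered

    prompt = build_ratio_analysis_prompt(
        metrics="the current ratio, debt-to-equity ratio and return on equity",
        period="Q4 2020",
        benchmark_source="the attached PDF of midsize retail benchmarks",
        threshold_pct=15,
    )
    print(prompt)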

Give your AI model a designated professional role

Assigning a role to the AI model helps it understand the context and respond consistently with that role. For example, you might assign the role of valuation specialist, revenue accountant or tax consultant, whichever is best suited to the underlying task.

Consider the difference between “review this lease agreement” versus “you are a senior technical accountant at PwC specializing in ASC 842 lease accounting. Review this lease agreement and identify all components that affect the initial lease liability calculation, including base rent, variable payments and embedded options.”

This technique improves the output because the task is framed within a specific professional context, and the AI steps into the shoes of the assigned role. For instance, when you assign the role of a valuation specialist, the AI will focus on economic reasoning and market assumptions. When you assign the role of an internal auditor, it will emphasize controls and risk assessment.
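
For those working through the OpenAI API rather than the chat window, the role is typically assigned in a system message. Here is a minimal sketch, assuming the OpenAI Python SDK; the model name is a placeholder and lease_agreement_text stands in for the actual contract text.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    lease_agreement_text = "..."  # paste or load the lease agreement text here

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model your firm has approved
        messages=[
            # The system message assigns the professional role.
            {
                "role": "system",
                "content": "You are a senior technical accountant specializing in "
                           "ASC 842 lease accounting.",
            },
            # The user message carries the task and the document to review.
            {
                "role": "user",
                "content": "Review this lease agreement and identify all components "
                           "that affect the initial lease liability calculation, "
                           "including base rent, variable payments and embedded "
                           "options.\n\n" + lease_agreement_text,
            },
        ],
    )
    print(response.choices[0].message.content)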

Provide examples

Instead of asking the AI to perform a task without any context, we can often feed examples, such as sample work products, into the context window. The AI can learn from these examples and produce outputs that better align with our expectations. Consider the difference between “complete the audit review memo for the valuation of convertible debt” versus “here are two sample review memos for the valuation of convertible debt under ASC 820. Now complete the audit review memo for ABC, Inc.’s convertible note valuation based on the attached work papers.”

This method is particularly helpful for recurring tasks like drafting memos and reports, since similar types of memos tend to follow consistent accounting terminology and logical flow. Examples are especially powerful for maintaining consistency in the output, ensuring the AI follows your firm’s specific format and tone for deliverables like review memos and client reports.
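
In an API workflow, the same idea can be expressed by placing the sample memos ahead of the actual request in the message list. Here is a minimal sketch; the sample memos and work-paper summary are hypothetical strings you would load from your own files, and the resulting messages list would be passed to the same chat completions call shown above.

    # Few-shot prompting: show the model sample memos before asking for a new one.
    sample_memo_1 = "..."      # prior review memo for a convertible debt valuation under ASC 820
    sample_memo_2 = "..."      # a second prior memo in the same format
    workpaper_summary = "..."  # key facts from the ABC, Inc. valuation work papers

    messages = [
        {"role": "system", "content": "You draft audit review memos in our firm's house style."},
        {
            "role": "user",
            "content": (
                "Here are two sample review memos for the valuation of convertible debt "
                f"under ASC 820:\n\nExample 1:\n{sample_memo_1}\n\nExample 2:\n{sample_memo_2}\n\n"
                "Now complete the audit review memo for ABC, Inc.'s convertible note "
                f"valuation based on these work papers:\n\n{workpaper_summary}"
            ),
        },
    ]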

Keep it structured

The primary interface with AI tools is usually the chat window, and it’s helpful to keep the prompt structured and properly labeled, especially when it spans a long context window. For example, if you’re asking the AI to summarize data or draft a report, break the prompt into clearly labeled sections such as Inputs and Assumptions, Instructions, and Examples, then insert the detailed prompt under each section. After all, clear structure isn’t just for humans. Your AI will thank you in its own way.
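
When the prompt is built programmatically, those labeled sections can be assembled from a simple mapping. A minimal sketch; the section contents are placeholders you would replace with your own inputs.

    # Assemble a long prompt from clearly labeled sections.
    sections = {
        "Inputs and Assumptions": "Trial balance for FY2024; assume a Dec. 31 year-end.",
        "Instructions": "Summarize revenue by segment and flag any month-over-month change above 10%.",
        "Examples": "Follow the format of the prior-quarter summary pasted below.",
    }

    prompt = "\n\n".join(f"{title}:\n{body}" for title, body in sections.items())
    print(prompt)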
