Guardrails for AI in Accounting – Insights from AICPA and the Big Four

Artificial intelligence is transforming accounting and finance — from automating reconciliations to drafting technical memos. But without guardrails, it can introduce risks that threaten compliance, client trust, and professional integrity.

The American Institute of CPAs (AICPA), the Big Four firms, and regulators worldwide agree: AI must be used responsibly, with human oversight and strict boundaries. Here’s how leading voices are setting the rules.

AICPA’s Perspective

The AICPA has been clear: AI offers efficiency, but also risks — from confidentiality breaches to unreliable output.

Key guidance includes:

  • Treat AI like a public website – Never input client-sensitive data into public tools like ChatGPT. Once shared, you can’t control where it goes.
  • Draft clear AI use policies – Specify permitted tasks (e.g., research, drafting) and require training so staff understand both capabilities and risks.
  • Maintain human skepticism – AI doesn’t fully grasp professional standards or context. Removing human review can lead to “disastrous consequences.”

In short: AI can help prepare work, but it’s the CPA’s responsibility to verify accuracy and compliance.

Big Four Firm Guidelines

PwC

PwC is embracing AI — but only in controlled, secure environments.

  • Developed an internal “AI co-pilot” for tax research.
  • Partnered with OpenAI to create a private, enterprise ChatGPT so client data stays protected.
  • Follows “Responsible AI” principles: governance, transparency, and human accountability in every use case.

PwC’s approach sends a clear message: data security is non-negotiable.

Deloitte

Deloitte’s AI playbook revolves around risk management and trust.

  • Requires human review, audit trails, and regular AI model testing.
  • Advises disclosing to stakeholders where AI is used in finance processes.
  • Deloitte’s Ryan Hittner stresses: “Can business users trust GenAI outputs? Only with strong validation and oversight in place.”

Deloitte treats AI as a support tool, not a decision-maker — insisting on robust governance procedures for every implementation.

EY & KPMG (Briefly)

  • EY trains staff extensively on AI ethics and compliance before granting access to AI tools.
  • KPMG has an AI governance framework that mandates bias checks, explainability, and restricted use cases.

Regulators & Standards Setters

Formal regulations are still emerging, but the trend is clear:

  • PCAOB and IFAC are exploring AI’s role in audit and assurance.
  • CPA Canada urges firms to implement AI governance before integrating AI into financial reporting.

Regulators are watching — expect more formal guidance in the next few years.

Emerging Risks in AI for Accounting

  1. Black-box AI & auditability – Accountants must be able to explain every number in financial statements. If an AI generates a valuation or allowance without transparency, it can’t be used as final evidence. Firms insist on explainable AI and full documentation.
  2. Hallucinations & bias – AI may produce incorrect citations, outdated regulations, or biased results. Professional bodies recommend verifying every fact before use.
  3. Compliance risks – Using public AI for client work can breach independence rules, data privacy laws, and professional conduct codes.

Actionable Steps for Firms

If your firm doesn’t have an AI policy, you’re already behind. Here’s where to start:

  • Create a formal AI usage policy – Define acceptable use cases, prohibited tasks, and required approvals.
  • Train your staff – Teach risks, governance principles, and how to identify hallucinations or bias.
  • Use enterprise AI solutions – Ensure data privacy, audit trails, and role-based permissions.
  • Implement internal controls – Monitor AI use, require documentation, and keep review/approval logs.
  • Ban public AI for client work until governance structures are fully in place.

AI can be a game-changer in accounting — but only if it’s used responsibly. As AICPA guidance and Big Four policies show, the key is to combine AI’s speed with human judgment and ethical boundaries.

With clear policies, training, and secure tools, firms can leverage AI’s benefits while protecting client trust and ensuring compliance.
