Should my small business use ChatGPT?
Probably yes — but with governance. AI tools like ChatGPT, Claude, and Microsoft Copilot deliver real productivity gains for drafting, summarizing, research, code, and customer support. The question isn't whether to use AI; it's how to use it responsibly: which tools, on which accounts (business, not personal), with what data classification rules, with what audit trail. Businesses that ignore AI fall behind. Businesses that adopt AI without governance end up with client data in third-party training pipelines and compliance findings nobody saw coming.
Is it safe to paste client data into ChatGPT?
It depends on which ChatGPT you're using. Free ChatGPT and ChatGPT Plus on consumer accounts: by default, conversations may be used for model training, and you have no Business Associate Agreement or signed data processing agreement. Pasting client PII, PHI, financial data, or attorney-client privileged content into these is a meaningful data-handling risk. ChatGPT Enterprise, ChatGPT Team, and OpenAI API: conversations are not used for training by default, and a data processing agreement is available. The same distinction applies to Anthropic Claude and Google Gemini. Use the business tier, not the consumer one.
What is the difference between consumer ChatGPT and ChatGPT Enterprise?
Consumer ChatGPT (Free and Plus) is designed for individual personal use — conversations may be used for model training by default, support is community-based, and there's no admin console or data processing agreement. ChatGPT Enterprise and Team are business tiers — conversations are not used for training by default, there's an admin console for centralized management, SSO and SCIM integration with Microsoft Entra ID or Google Workspace, audit logging, and a data processing agreement. For any business use involving client or proprietary data, Enterprise or Team is the right tier.
Can a healthcare practice use AI and still be HIPAA-compliant?
Yes, but only with the right AI tier. The default consumer versions of ChatGPT, Claude, and Gemini are not HIPAA-eligible — no Business Associate Agreement is offered. The HIPAA-eligible options as of 2026 include: Microsoft 365 Copilot (covered under the Microsoft BAA when properly activated), Azure OpenAI Service (covered under the Microsoft BAA), Anthropic Claude via AWS Bedrock with a HIPAA-eligible AWS account, and certain healthcare-specific AI vendors with signed BAAs. Practices using AI without a BAA in the chain have a HIPAA gap regardless of how careful staff are with the inputs.
What is an AI acceptable-use policy?
A written policy that tells staff which AI tools are approved, which accounts to use (business tier, never personal), what data classifications are allowed in which tools (no client PII in consumer ChatGPT, period), how to log AI-generated content in client work, and what disclosure obligations apply to clients and regulators. Most small businesses we audit have zero written AI governance — staff are pasting client data into personal ChatGPT accounts and nobody has told them not to. Simply IT drafts and maintains a tailored AI acceptable-use policy for every Simply Secure and Simply Compliant client.
What is Microsoft 365 Copilot and how much does it cost?
Microsoft 365 Copilot is the generative AI assistant embedded across Word, Excel, PowerPoint, Outlook, Teams, and the broader Microsoft 365 surface. It pulls context from your organization's own documents, emails, calendars, and chats (governed by your existing Microsoft permissions) to draft, summarize, analyze, and answer questions. Pricing in 2026 is $30 per user/month on top of an underlying Microsoft 365 Business Standard or Business Premium license, billed on an annual commitment; minimum tenant license requirements vary by region. For businesses already on Premium, Copilot is the most natural enterprise-AI entry point.
What is Anthropic Claude and how is it different from ChatGPT?
Anthropic Claude is a family of large language models developed by Anthropic, a US-based AI safety company. Functionally, Claude is comparable to ChatGPT for most business tasks (writing, summarization, code, analysis), with particular strengths in longer-context work and instruction-following. Differences for business use: Claude is available via Anthropic's own API and via cloud platforms (AWS Bedrock with HIPAA-eligible accounts, Google Cloud Vertex AI), and the Claude.ai consumer and Team tiers operate similarly to ChatGPT's consumer and Team tiers. Many businesses use both Claude and ChatGPT for different tasks; they're not mutually exclusive.
What is shadow AI and is it a real risk?
Shadow AI is the use of AI tools by employees on personal accounts, outside the visibility of the business's IT and security governance — typically free ChatGPT, free Claude, or free Gemini accounts used to summarize client documents, draft client communications, or analyze internal data. The risk is real: data leakage into third-party training pipelines (with consumer tiers), no audit trail, no data processing agreement, and compliance findings when regulators or clients ask “what AI did you use on our matter?” The fix is the AI acceptable-use policy plus making the approved business-tier AI tools easy enough that staff don't reach for personal accounts.
How much should a small business expect to spend on AI tools?
Realistic 2026 budget for a 10-person office adopting AI thoughtfully: $30 per user/month for Microsoft 365 Copilot if you want AI inside Outlook/Word/Excel/Teams (~$300/month for 10 people), or $25–30 per user/month for ChatGPT Team or Claude Team if you want a standalone chat workspace (~$250–300/month for 10 people). Some businesses use both. Add a one-time $1,500–3,000 for the AI acceptable-use policy, training, and governance setup if you don't already have IT support handling it. Total first-year AI spend for a thoughtful 10-person rollout: roughly $5,000–8,000.
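The arithmetic behind that estimate can be sketched as a quick calculation — the figures are the illustrative ranges from this answer, not vendor quotes:

```python
# Rough first-year AI budget for a 10-person office, using the
# estimate ranges quoted above (illustrative figures, not vendor quotes).
USERS = 10

# One chat tool at $25-30 per user/month (ChatGPT Team, Claude Team,
# or Microsoft 365 Copilot at the top of that range)
tool_low_monthly = 25 * USERS    # $250/month
tool_high_monthly = 30 * USERS   # $300/month

# One-time policy drafting, training, and governance setup
setup_low, setup_high = 1_500, 3_000

first_year_low = tool_low_monthly * 12 + setup_low      # $4,500
first_year_high = tool_high_monthly * 12 + setup_high   # $6,600

print(f"First-year range, one tool: ${first_year_low:,}-${first_year_high:,}")
```

Adding a second tool (e.g. Copilot plus a standalone chat workspace) adds roughly $3,000–3,600 per year on top of this, which is how a two-tool rollout reaches the upper end of the estimate.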
How does Simply IT govern AI access for clients?
On Simply Secure and Simply Compliant tiers we (a) draft the client's AI acceptable-use policy, (b) configure Microsoft 365 Copilot or the chosen alternative under the business's identity provider with SSO and audit logging, (c) block or restrict consumer-tier AI tools where appropriate via Conditional Access and DNS filtering, (d) train staff on which tools to use for which data, and (e) document everything to be audit-defensible. The goal isn't to block AI — it's to make the right AI easy and the wrong AI hard, which is exactly what compliance regulators are starting to expect.
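Item (c) — restricting consumer-tier AI at the network layer — can be sketched as a simple allow/block decision. The domain lists below are illustrative examples, not a complete ruleset; note that consumer and Enterprise ChatGPT share the same domains, which is why tenant restrictions via Conditional Access are listed alongside DNS filtering in the answer above:

```python
# Illustrative sketch of a "block consumer AI, allow business AI" DNS
# filtering decision. Domain lists are examples only, not a real ruleset.
CONSUMER_AI_BLOCKLIST = {
    "claude.ai",          # Claude consumer tier (business use goes via API/Bedrock here)
    "gemini.google.com",  # Gemini consumer tier
}
BUSINESS_AI_ALLOWLIST = {
    "api.anthropic.com",  # approved business-tier API endpoint (example)
}

def dns_decision(domain: str) -> str:
    """Return 'allow' or 'block' for an outbound DNS lookup."""
    if domain in BUSINESS_AI_ALLOWLIST:
        return "allow"
    # Block the zone itself and any subdomain of a blocked zone
    if any(domain == d or domain.endswith("." + d) for d in CONSUMER_AI_BLOCKLIST):
        return "block"
    return "allow"

print(dns_decision("claude.ai"))          # block
print(dns_decision("api.anthropic.com"))  # allow
```

DNS filtering alone can't distinguish consumer from business traffic on a shared domain, so in practice it is combined with identity-layer controls (SSO-gated business tenants, Conditional Access) rather than used on its own.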