
There is a bizarre, almost comic distortion happening in the world of business strategy and AI: experts popping up overnight who promise miraculous results, tools that claim to replace thinking with a button press, and projects that gobble budgets while delivering little. The field that should be helping companies grow and adapt has been swamped by noise — and the consequences are real. This piece cuts through the hype, explains what works, outlines what fails, and gives a practical playbook for using AI the right way: to augment strategic thinking, not substitute for it.

Table of Contents
- Start with a simple test: are you solving a real problem?
- Why fake experts flourish
- How strategic thinking really works: three buckets
- Real example: small changes, massive growth
- The origin story that matters
- Generative AI: why it matters and what people misunderstand
- Why AI projects fail — the hard truth
- The remote control analogy: a lesson in product design and AI
- Red flags when evaluating AI vendors and consultants
- Checklist: questions to ask potential AI partners
- Hallucinations: treat them as signals, not just bugs
- How to integrate AI into strategy work without breaking everything
- Practical templates: prompts and governance
- Role of leaders: enable judgment, not worship outputs
- When to build vs buy
- Sample adoption roadmap for a six-month pilot
- Vendor procurement playbook (short)
- Case study snippets
- Practical signal you’re getting value from AI
- Final principles to remember
- How do I tell the difference between a real AI expert and a fake one?
- What is the first pilot I should run with AI in strategy?
- How do I reduce hallucinations in AI outputs?
- Should I buy an off-the-shelf LLM tool or build our own?
- How do we get frontline employees to adopt AI-driven strategy tools?
- What are immediate red flags when evaluating AI projects?
- Can AI replace strategy consultants?
- Next steps
Start with a simple test: are you solving a real problem?
Before buying a product or hiring an “AI expert,” ask one question: what problem will this solve for the people who actually do the work? Too many solutions are designed to impress executives with slick decks while ignoring the frontline reality. If the plant manager, the call center clerk, or the customer-facing salesperson cannot see how the change helps them do their job better, the strategy sits on a shelf. Culture eats strategy for breakfast — and without adoption, a beautiful plan is just wallpaper.
Why fake experts flourish
There are three structural reasons the market is full of noisy, ineffective “experts.”
- Low barrier to entry: Access to large language models and no-code tools has made it easy to create products and consulting offers that look sophisticated but are shallow under the surface.
- Performance theater: Clients often want reassurance more than results. A glossy deck and confident language can persuade decision makers even when the underlying approach is flawed.
- Misunderstanding of the tool: Many providers claim they “train the LLM” when in fact they are only writing prompts. There is a real technical and methodological gap between tuning models, integrating them safely, and simply leaning on prompts to generate output.
That combination creates fertile ground for vendors who promise everything to everyone. If a vendor says “we do everything for everybody,” that is a red flag — run. Tools and consultancies that mass-market one-size-fits-all solutions are almost guaranteed to deliver superficial results.

How strategic thinking really works: three buckets
Effective strategy falls into three complementary domains. Treating them as interchangeable is a big cause of failure.
1. Marketing strategy
This is not the same thing as execution. Marketing strategy defines who you target, why they care, and the path you want them to take.
- Segments and personas: Which groups of people matter? What are their needs?
- Customer journey mapping: How do prospects discover, evaluate, and buy?
- Brand archetypes: What role does your brand play in their life?
Only after you have the strategy do you “throw it over the wall” to designers and marketers who create the assets and campaigns. If the strategy is fuzzy, execution becomes a guessing game.
2. Competitive strategy
Competing by copying competitors often leads to price wars. The reflex to “just lower the price” is one of the fastest routes to margin collapse. Compete on different dimensions instead: service, bundled features, distribution, or capabilities built on your core strengths that competitors find hard to replicate.
3. Blue Ocean strategy
Blue Ocean is about creating new demand instead of slicing existing demand. It is the art of redefining who the buyer is.
Example: the console wars. While Sony and Microsoft competed on raw horsepower for hardcore gamers, a third player won by doing something different: designing a product that attracts entirely new buyers, including non-gamers, retirees, and stay-at-home parents. That redefinition of the buyer sent sales soaring and avoided a brutal margin race.

Real example: small changes, massive growth
Success frequently comes from small but strategic moves. One classic case: a handheld gaming product was already selling to boys and young men. Instead of trying to steal market share in an already saturated niche, the company asked a simple question: who else might buy this device? The answer — women and girls — led to a modest product tweak (color variations) and a dramatic market expansion. The lesson: redefining the buyer group costs little and can unlock new demand.

The origin story that matters
Good strategy tools come from solving real problems, not from chasing the latest buzz. A meaningful origin story, such as building a tool to help busy executives move from sketching simplistic growth curves to actually thinking about growth, produces different outcomes than a vendor that decided overnight to “monetize LLMs.” When a tool is born from deliberate research and iterative product work (research with users, integration with workflows, attention to human adoption), it avoids much of the drift that afflicts many modern AI offerings.

Generative AI: why it matters and what people misunderstand
Generative AI did not start last year; the discipline traces back decades. The big leaps that made modern tools possible include the rise of the web, search, and then transformer architectures that can read text and map the relationships within it. That architecture gave rise to models that generate plausible text one token at a time, predicting what comes next from everything they have read so far. The famous phrase “attention is all you need” is not a marketing slogan; it is the title of the 2017 paper that introduced the transformer architecture these systems are built on.

Here is what matters for strategy teams:
- Relevance wins: A system that can identify signal in noise rewards content that is genuinely useful to readers or users. The wrong response is to bludgeon the model with boilerplate and expect results.
- Context is king: Models do not magically understand your company’s unwritten knowledge. Feeding the right context — customer feedback, product metrics, operational constraints — is essential.
- Costs are real: Running state-of-the-art models at scale costs money. If a provider promises a magic free solution, evaluate how they can sustain the operation. If you are paying a subscription, understand what you are really buying versus what the vendor is subsidizing.

Why AI projects fail — the hard truth
MIT and others report high failure rates for AI transformations. The reasons are predictable:
- Misaligned goals: A technical demo is not a business KPI. If the pilot focuses on model accuracy and not on adoption, it becomes an engineering victory on a dead product.
- Poor vendor selection: Shiny claims do not equal competence. Distinguish between teams that can ship reliable integrations and those that can generate marketing collateral.
- Lack of process change: AI can accelerate tasks, but if you do not redesign the process and empower people to use the outputs, the model’s benefit evaporates.
- Featuritis: Adding more capabilities to a product without trimming complexity increases cost and confusion. That comes back to the remote control analogy: more buttons rarely mean more value.

The remote control analogy: a lesson in product design and AI
Consider the remote control that keeps adding buttons. Each new button must be designed, tested, manufactured, supported, and documented. The result is a bloated interface that confuses the majority of users while marginally delighting a tiny minority. That pattern repeats in AI products:
- Feature creep raises costs.
- Features without clear user benefit create friction.
- Complex tools that emulate every possible use case are often worse than simple tools optimized for a core job.
The antidote is ruthless prioritization: identify the few functions that create the most value, remove the rest, and focus on making the core experience delightful.
Red flags when evaluating AI vendors and consultants
Protect your investment by watching for these warning signs:
- “We do everything for everybody” — an inability to define a focus is a sign of inexperience.
- Vague claims about training — check whether they mean fine-tuning a model, engineering prompts, or simply curating outputs. Training and fine-tuning require robust data, infrastructure, and evaluation frameworks.
- Overreliance on generic LLM outputs — if the deliverable is pages of generic content or a deck full of MBA buzzwords, it is cheap and disposable.
- No plan for adoption — technical rollout without change management indicates the vendor does not expect users to adopt the solution.
- No measurable KPIs — ask for baseline metrics and post-implementation targets tied to real business outcomes.
- Opaque pricing — if you cannot map costs to deliverables and expected scale, proceed cautiously.
Checklist: questions to ask potential AI partners
- What specific business problem will this solve, and how will success be measured?
- Who on your team has domain expertise in our industry and in data/model operations?
- What data will you need, and how will data quality be assessed and improved?
- Are you fine-tuning models, building pipelines, or using prompts? Explain the difference and your approach.
- How will you mitigate hallucinations and ensure output accuracy?
- What is the adoption plan? Which users will be trained and how will feedback be collected?
- What are the ongoing costs for compute, hosting, and maintenance?
- Can you provide case studies with measurable results, not just testimonials?
Hallucinations: treat them as signals, not just bugs
AI models sometimes invent facts or produce plausible-sounding but incorrect answers. The instinct is to dismiss those outputs as “hallucinations” and move on. That is shortsighted. When a model proposes an unexpected buyer group, product idea, or route to market, pause:
- Is this something we overlooked?
- Could the model be extrapolating from subtle signals in our data?
- Does the idea merit human validation through research or small experiments?
Not every hallucination is a golden insight, but some are. Use them as prompts for disciplined inquiry rather than as final answers. Ask: is the machine seeing something I don’t?

How to integrate AI into strategy work without breaking everything
AI is best used as an augmentation layer — a thought partner that helps you explore options faster, map scenarios, and generate prototypes. Here is a practical roadmap.
1. Define a narrow, measurable use case
Start with a single question: What one strategic decision, if improved, would create the most value? Narrow pilots outperform broad experiments every time.
2. Involve the people who will use the outputs
Bring plant managers, customer support reps, and marketing operators into the process. Their buy-in is the key to execution. If they do not feel the tool helps them, you will not see ROI.
3. Provide context, not just raw prompts
Feed the model with curated customer feedback, product metrics, and operational constraints. The model is an amplifier of what you provide; garbage in yields amplified garbage out.
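As a concrete illustration, context packaging can be as simple as assembling curated inputs into labeled sections before they reach the model. Everything in this minimal sketch (the section headings, the sample data) is hypothetical:

```python
# Minimal sketch of context packaging. The section headings and sample
# data are illustrative assumptions, not a prescribed format.

def build_context(feedback: list[str], metrics: dict[str, float], constraints: list[str]) -> str:
    """Assemble curated business context into one labeled block for the model."""
    lines = ["## Customer feedback (curated)"]
    lines += [f"- {quote}" for quote in feedback]
    lines.append("## Key metrics")
    lines += [f"- {name}: {value}" for name, value in metrics.items()]
    lines.append("## Operational constraints")
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

context = build_context(
    feedback=["Setup took two days, not two hours.", "Love the reporting, hate the exports."],
    metrics={"churn_rate_pct": 4.2, "nps": 31.0},
    constraints=["No new headcount this quarter", "EU customer data must stay in the EU"],
)
```

The point is discipline, not sophistication: if the context block is junk, no prompt cleverness will save the output.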
4. Establish guardrails and review loops
Design prompt templates, human-in-the-loop review steps, and escalation paths for high-risk recommendations. Reduce hallucinations by aligning validation steps to your risk appetite.
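One lightweight way to implement such a guardrail is a routing rule that escalates high-impact or sensitive recommendations to a human reviewer. The risk terms and the impact threshold below are illustrative assumptions, not a prescribed policy:

```python
# Illustrative human-in-the-loop gate: anything high-impact or touching
# a sensitive domain is queued for sign-off instead of flowing straight
# into a decision. Terms and threshold are placeholders for your own policy.

HIGH_RISK_TERMS = ("pricing", "layoff", "legal", "compliance", "contract")

def needs_human_review(recommendation: str, estimated_impact_eur: float) -> bool:
    touches_sensitive_area = any(term in recommendation.lower() for term in HIGH_RISK_TERMS)
    return touches_sensitive_area or estimated_impact_eur > 100_000

def route(recommendation: str, estimated_impact_eur: float) -> str:
    if needs_human_review(recommendation, estimated_impact_eur):
        return "queued_for_human_signoff"    # escalation path
    return "approved_for_discussion"         # still a hypothesis, never an action
```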
5. Measure adoption and impact
Track usage metrics, time-to-decision, conversion rates, margin improvements, or employee satisfaction changes. Tie each metric to the initial business case and to the executive sponsor who owns it.
6. Iterate and scale
Expand the scope only after hitting adoption and ROI targets. Use learnings to refine data pipelines, prompt libraries, and decision flows.
Practical templates: prompts and governance
Prompts are not magic spells. Structure them so outputs are usable (a minimal code sketch follows this list):
- Context section: 2-3 short paragraphs summarizing the business problem, KPIs, and known constraints.
- Data summary: bullets describing the most relevant metrics and customer quotes.
- Request: clear, narrow instructions specifying the format you want back (e.g., prioritized list of three options with estimated impact and next steps).
- Validation rule: ask the model to list assumptions and propose how to test each assumption quickly.
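Rendered as code, the template might look like the following minimal sketch; the section wording and the function name are illustrative, not a fixed standard:

```python
# Sketch of the four-part template above as a single prompt string.
# Wording is an assumption; adapt the sections to your own governance rules.

def strategy_prompt(context: str, data_summary: str, request: str) -> str:
    return "\n\n".join([
        f"CONTEXT\n{context}",
        f"DATA SUMMARY\n{data_summary}",
        f"REQUEST\n{request}\n"
        "Return a prioritized list of three options with estimated impact and next steps.",
        "VALIDATION\nList every assumption you made and propose a quick, cheap test for each.",
    ])
```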
Governance matters: maintain a living register of prompts, link each prompt to outcomes, and require human sign-off before any high-impact action is taken.
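A living register does not need heavy tooling; one small record per prompt is enough to link prompts to outcomes and enforce sign-off. The field names in this sketch are assumptions:

```python
# Sketch of a living prompt register. Field names are hypothetical;
# the point is that every prompt carries its goal, outcome, and sign-off.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class PromptRecord:
    prompt_id: str
    prompt_text: str
    business_goal: str
    outcome: str = "pending"                 # filled in once the decision plays out
    signed_off_by: Optional[str] = None      # required before any high-impact action
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

register: list[PromptRecord] = []
register.append(PromptRecord("seg-001", "<full prompt text>", business_goal="Segment prioritization pilot"))
```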
Role of leaders: enable judgment, not worship outputs
Leadership matters more than ever. The technical stack can surface options, but someone must choose. Encourage leaders to treat AI outputs as a set of hypotheses. Use them to expand the exploration set, but keep human judgment in the loop. Reward employees who bring insights from the field and pair them with AI-generated options for validation.
When to build vs buy
If your industry requires deep domain models (e.g., complex compliance, proprietary manufacturing processes), building bespoke models may pay off. If the value is in faster ideation, better content, or summarization of public sources, buying a vertical solution or integrating a trusted SaaS product is faster and less risky. Focus on the business capability, not the underlying model type.
Sample adoption roadmap for a six-month pilot
- Week 1-2: Define the business case and success metrics. Identify stakeholders and frontline users.
- Week 3-5: Collect data, assemble prompt templates, and design user flows.
- Week 6-8: Build prototypes and run small user tests with 5-10 frontline users.
- Week 9-12: Measure adoption and impact. Iterate prompts and UX based on feedback.
- Month 4-6: Scale to additional teams, tighten governance, and document ROI for stakeholder buy-in.
Vendor procurement playbook (short)
- Request a live demo using your data or a sanitized version of it.
- Ask for a 30-day pilot with clear exit criteria.
- Require documentation of data usage, model drift monitoring, and privacy safeguards.
- Insist on a maintenance and update SLA for the first year.
Case study snippets
Label printing and serendipity. Solving one technical problem exposed a much larger strategic opportunity. A system built to automate expert legal tasks also surfaced a capability to print labels on the web. That small overlap led to a sale to a major manufacturer and the creation of a new business group. The moral: small functionality can surface unexpected value when you pay attention to real user needs.
Repositioning a buyer group. The pink handheld example demonstrates how a low-cost, focused product change — and a re-think about whom the product is for — unlocks growth without a massive marketing budget. The right strategic question beats the loudest campaign 9 times out of 10.
Practical signal you’re getting value from AI
- Frontline employees use the tool daily and reference it in decision meetings.
- Output quality improves after iterations and user feedback cycles.
- KPIs tied to the pilot move in the expected direction (time saved, conversion, margin, or NPS).
- The system helps identify non-customers and new segments that your team can validate quickly.

Final principles to remember
- AI augments; it does not replace judgment. Use it to expand what your team can imagine, not to short-circuit decision making.
- Simplicity beats complexity. Remove features that confuse more people than they help. Lower cost and raise value by focusing on the core job to be done.
- Involve the people who execute. Adoption comes from ownership. Make the plant manager, the support rep, and the salesperson co-creators of the new process.
- Measure adoption and impact. If you cannot show measurable improvement in a pilot, do not scale.
- Be skeptical of grand promises. If something sounds too good to be true, it probably is. Ask for evidence and a small, fast pilot.
How do I tell the difference between a real AI expert and a fake one?
A real expert explains trade-offs, acknowledges limitations, and provides concrete examples of measurable outcomes. They can describe whether they will fine-tune a model, build data pipelines, or simply use prompts. Fake experts promise everything, lack domain examples, and avoid clear success metrics.
What is the first pilot I should run with AI in strategy?
Pick a narrow decision with measurable impact: segment prioritization, competitive response options, or a customer support summarization task. Define success metrics, involve frontline users, and schedule a 6-12 week pilot with clear stop/go criteria.
How do I reduce hallucinations in AI outputs?
Provide high-quality context, require the model to list assumptions, use human-in-the-loop review for critical outputs, validate surprising suggestions by testing, and keep provenance logs of data sources used to generate each recommendation.
Should I buy an off-the-shelf LLM tool or build our own?
If the need is generalized generative text or lightweight analysis, buy a trusted vertical tool. If you have proprietary data, domain complexity, or unique risk requirements, plan to build with a partner and allocate resources for data engineering and model maintenance.
How do we get frontline employees to adopt AI-driven strategy tools?
Involve them early, give them control over the outputs, provide easy interfaces, and tie tool usage to concrete improvements in their day-to-day metrics. Training sessions and quick wins build momentum.
What are immediate red flags when evaluating AI projects?
Vague promises, no measurable KPIs, one-size-fits-all claims, inability to demonstrate pilot outcomes, and opaque pricing are immediate red flags. Also beware of teams that confuse prompt engineering with actual model training.
Can AI replace strategy consultants?
Not entirely. AI can speed ideation and generate scenario options, but strategy requires human judgment, cultural alignment, and execution planning. The best approach is a partnership where AI augments consultants and internal teams rather than replacing them.
Next steps
If you are testing tools, insist on a real pilot with user involvement and KPIs. If you are evaluating vendors, use the checklist above. If you are building, focus on a narrow, high-value use case and plan for governance and adoption from day one.
The AI revolution is real and powerful, but it is a tool that reveals and amplifies strategic thinking — not a magic bullet. Be skeptical, focus on users and outcomes, and use AI to expand your capacity to think rather than to outsource thinking entirely. Enjoy the journey, and measure what matters.
