Tuesday, February 17, 2026

AI comparisons

Here’s a concise, “all‑axes” comparison of the current top‑tier stacks: Gemini (2.5 Pro / 3 Pro), GPT (GPT‑4.1 / ChatGPT‑5 family), and Claude (Claude 4 Sonnet/Opus). I’ll focus on what actually differs in practice.

High‑level positioning

  • Gemini: Multimodal/context monster with tight Google integration and massive context; strongest when you mix text, code, images, audio, video, or huge corpora.

  • GPT: Most generalist and tool‑rich; very strong reasoning + coding with the broadest native multimodal stack and ecosystem (plugins, agents, third‑party tools).

  • Claude: Deliberate thinker optimized for coding, long‑form analysis, safety, and “extended thinking”; strongest on complex software and careful explanations.

Capability table (late‑2025 / early‑2026 snapshot)

  • Core strength: Gemini: multimodal + huge context + Google integration. GPT: balanced reasoning, coding, multimodal, tools. Claude: best‑in‑class coding and careful reasoning.

  • Reasoning (general): Gemini: competitive, especially with long or multimodal inputs; slightly behind GPT/Claude on some pure‑text reasoning benchmarks. GPT: very strong general reasoning; 1M‑context “needle in a haystack” and graphwalk evals show robust long‑context logic. Claude: strongest on many structured reasoning/analysis tasks with “extended thinking” turned on.

  • Coding: Gemini: good to very good; below Claude and often a bit under GPT on SWE‑bench‑style tests. GPT: strong, versatile coding and debugging across languages; not the top on SWE‑bench but solid. Claude: industry‑leading SWE‑bench Verified (≈72–75%, up to ≈80% with parallel compute).

  • Multimodal I/O: Gemini: native text, images, audio, and video in one stack; strong at video and document‑style vision. GPT: full unified multimodal (text, image, audio, video) with mature tools; very flexible. Claude: primarily text‑centric with maturing image/PDF input; less emphasis on a full video/audio pipeline.

  • Context window: Gemini: up to ≈2M tokens on Pro in Vertex / high tiers; Flash also very large. GPT: up to 1M tokens in the GPT‑4.1 family (API), smaller in the standard UI. Claude: commonly ≈200k in Sonnet/Opus 4; up to 500k in some enterprise tiers.

  • Long‑context quality: Gemini: designed for massive document/code/video workloads; strong but not always best at fine‑grain reasoning inside huge contexts. GPT: very good; 1M‑token “needle” and graphwalk benchmarks show robust retrieval in long context. Claude: good, but smaller max windows; shines more in careful reasoning than sheer size.

  • Speed / latency: Gemini: Flash models extremely fast (hundreds of tokens/s, sub‑0.3s TTFT); Pro slower but still competitive. GPT: balanced; faster than earlier GPT‑4, not as fast as Gemini Flash in most reports. Claude: Sonnet often mid‑pack for speed; extended‑thinking modes are intentionally slower for harder tasks.

  • Pricing (API ballpark): Gemini: aggressive on context (lots of tokens per dollar); very good value for multimodal + long context. GPT: generally mid‑range per million tokens; good value given the ecosystem and tools. Claude: tends to be pricier at the top tiers, but the cost is justified for orgs that value coding + safety.

  • Safety / alignment: Gemini: strong Google safety stack, guardrails, and filters; conservative on some topics. GPT: mainstream OpenAI approach; improved guardrails, but more permissive than Claude on many tasks. Claude: most safety‑constrained; Anthropic centers “constitutional AI” and conservative defaults.

  • Tooling & agents: Gemini: deep integration with Google (Workspace, Search, Maps, YouTube) plus Vertex AI; agentic features and Deep Research in Gemini Advanced. GPT: rich tool ecosystem (function calling, workflows, agents, plugins); strong third‑party ecosystem. Claude: strong for enterprise agents (Bedrock, Vertex, Anthropic API), with emphasis on reliability and governance.

  • Ecosystem & adoption: Gemini: natural choice for Google‑centric orgs; strong in data/Docs/Sheets, Android, and Chrome contexts. GPT: broadest developer and consumer footprint; many libraries, UIs, and SaaS products built around GPT APIs. Claude: popular in enterprises that care about risk, compliance, and code quality; embedded in tools like GitHub Copilot agents and Bedrock.

Detailed axes

1. Reasoning and long‑context work

  • Gemini: Very strong at multi‑step reasoning when the task uses its strengths (huge context or multimodal inputs), but independent testing often puts it just below Claude for hard coding/logic and roughly on par with or slightly under GPT‑4.1 on pure‑text benchmarks.

  • GPT: Excellent generalist; a 1M‑token context plus strong “needle‑in‑a‑haystack” and graphwalk scores show it can both hold and use massive context effectively.

  • Claude: Often top on structured reasoning and careful analysis, especially when extended thinking is enabled, at the cost of latency and price.

If you’re doing long technical reports or philosophical analysis, Claude usually gives the most coherent, reflective write‑ups; GPT is close and more versatile; Gemini is best when those reports must integrate many files, images, or long transcripts.

2. Coding and software engineering

  • Benchmarks: Multiple reports put Claude 4 Sonnet/Opus at the top of SWE‑bench‑style benchmarks (≈72–75%+, up to ≈80% with parallel compute), with Gemini 2.5 Pro lagging and GPT‑4.1 in the middle.

  • Developer workflow:

    • Gemini: Great when your repos live in Google Cloud, and when you need to reason over diagrams, logs, or UI screenshots as part of coding.

    • GPT: Best “all‑rounder” for IDE integrations, agents, and quick prototypes; wide tooling support (Cursor, VS Code assistants, etc.).

    • Claude: Favored for refactoring large codebases and debugging subtle issues; popular for “trust it with a big monorepo” tasks.

For your kind of deep, systems‑level coding or analysis, Claude is generally the best first pick; GPT is the second choice for breadth and tools; Gemini is ideal when the code problem is entwined with data, docs, or visual context.
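That split can be sketched as a toy routing rule. A minimal sketch: the returned labels are model families, not real API model identifiers, and the conditions simply encode the heuristics above.

```python
def pick_model(task: str, has_visual_context: bool = False) -> str:
    """Toy router reflecting the coding heuristics above.

    Returns an illustrative model-family label; the exact model IDs
    exposed by each vendor's API differ and change over time.
    """
    if has_visual_context:
        # Code problems entwined with diagrams, logs, or UI screenshots
        return "gemini"
    if task in {"refactor", "debug", "monorepo"}:
        # Deep, systems-level work on large codebases
        return "claude"
    # Breadth: IDE integrations, agents, quick prototypes
    return "gpt"
```

For example, `pick_model("refactor")` routes to the Claude family, while `pick_model("prototype")` falls through to GPT.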

3. Multimodal (text, image, audio, video)

  • Gemini: Strong emphasis on multimodality (image, audio, and video understanding, plus generation through related models such as Veo and Imagen) in the Google AI Pro/Ultra bundles.

  • GPT: Highly capable unified multimodal model embedded in ChatGPT‑5/4.1, handling images, audio, and video in a single conversational flow, with many creative tools wrapped around it.

  • Claude: Primarily text‑first; image/PDF inputs are supported in many deployments, but it does not yet lean as hard into full video/audio workflows as Gemini or GPT.

For drone footage, environmental imagery, and long video or audio logs, Gemini or GPT currently give you more leverage than Claude.

4. Context windows, memory, and “agentic” behaviour

  • Context: Gemini often leads on raw context size (up to ≈2M tokens), GPT offers up to 1M, and Claude up to 200–500k depending on tier.

  • Use of context: GPT shows particularly strong ability to actually retrieve specific content from giant contexts; Claude sacrifices raw size but tends to reason deeply over what it has; Gemini sits between, with emphasis on multimodal long‑context workloads.

  • Agents & tools: All three support tool calling and multi‑step workflows; GPT has the most mature ecosystem, Gemini is tightly coupled to Google services, and Claude leans into enterprise agents with safety and governance knobs.

For your “big corpus” hydrology/indigenous‑knowledge research, Gemini and GPT are attractive for sheer context; Claude is ideal when you want slower but more interpretable chains of reasoning over smaller curated chunks.
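For “will my corpus fit?” questions, a common rough rule of thumb is about four characters per token for English prose. A minimal sketch, assuming the approximate window sizes quoted above (ballpark figures, not guaranteed API limits, and they vary by tier):

```python
# Approximate top-tier context windows discussed above, in tokens.
# These are rough snapshot figures, not vendor-guaranteed limits.
CONTEXT_WINDOWS = {
    "gemini": 2_000_000,
    "gpt": 1_000_000,
    "claude": 200_000,
}

def rough_tokens(text: str) -> int:
    """Crude estimate: ~4 characters per token for English prose."""
    return len(text) // 4

def fits(text: str, model: str) -> bool:
    """Could the whole text plausibly fit in one context window?"""
    return rough_tokens(text) <= CONTEXT_WINDOWS[model]
```

A 2 MB plain‑text corpus is roughly 500k tokens: comfortably within Gemini’s and GPT’s windows, but beyond Claude’s common 200k tier, which would force chunking or retrieval.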

5. Safety, reliability, and “feel”

  • Gemini: Conservative content filters and Google’s safety stack; good for organizations worried about brand risk but sometimes frustrating for edgy or controversial topics.

  • GPT: Middle of the road; stronger safety than early GPT‑4 but not as locked down as Claude on some sensitive themes.

  • Claude: Most risk‑averse, with “constitutional AI” and strong refusal behavior; often preferred for regulated sectors and sensitive domains.

Subjectively: Claude often “feels” like a careful human expert; GPT like a flexible all‑purpose collaborator; Gemini like a powerful multimodal lab instrument wired into Google.


If you tell me your main dimensions (e.g., coding vs. research vs. multimedia vs. philosophy) and whether cost/latency matter, I can give a more pointed recommendation like “for X, default to Claude; for Y, switch to Gemini; use GPT when you need Z.”

Friday, February 13, 2026

Message from Charles Aulds

Donald Trump's first term in office (2017 to 2021) was dedicated to fulfilling four promises he made to get elected:
  1.  Build an end-to-end border wall with Mexico (and make Mexico pay for it)
  2.  Repeal Obamacare
  3.  Put Hillary Clinton in jail (promised repeatedly during the campaign)
  4.  Eliminate the federal debt if he was given 8 years in office (source)
By the end of Trump's first four-year term, only 9 miles of border wall had been newly constructed (of the 493 miles of wall built under the Trump administration, most was replacement fencing). Mexico did not pay for it. Hillary Clinton is not in jail and has never even been indicted for a crime.

Obamacare was never repealed and is more popular than ever (source).

On January 20, 2017 (the day of Donald Trump's first inauguration), the US federal debt was about $19.95 trillion. It is currently over $38 trillion: an increase of roughly 90% in nine years.
 

Americans got played by a consummate con man. Twice. Learn to recognize a con artist; this one (I really hate to inform you) was shockingly easy to identify.
___
Charles Aulds
 

Yeh, you were played ...