Entry before the entries

Your product is good. Your team is capable. Something, somewhere, has stopped moving.

You shipped an AI feature and nobody uses it. The roadmap keeps promising the same three things. You added tests, coverage, analyzers — the project is still broken. You paid an expensive consultant and got two weeks of clarity before gravity resumed. You have more contractors than code. You are doing the work, and the work is getting harder, and nobody on the outside quite believes you.

These are not moral failures. They are structural patterns. Teams that looked exactly like yours have walked into each of them — not because they were careless, but because the usual advice is decades old and the tools are eighteen months old and the gap is where projects die.

What follows are four field notes. Four situation types, written the way a practitioner writes them in the margins of a notebook: Situation · How it usually ends · What I’d do differently. If any of them describes the room you are sitting in, there is a conversation worth having.

If this sounds like you, start a conversation

01

This is the can’t-leave-the-laptop pattern — it shows up everywhere AI has a board mandate and no product owner.

The AI pilot that can’t leave the laptop

Situation

AI is obviously important. Management has said so, loudly, since last spring. A small team has built a promising prototype — a ChatGPT wrapper, a retrieval demo, something that works on one engineer’s machine with a paid API key. Everyone agrees it should be in the product by Q3. No one has said what problem it is supposed to solve for which customer, on which data, at what reliability. The vendor demos keep promising the same three things.

How it usually ends

The prototype never leaves the laptop. The budget line survives one more quarter by being renamed. Two years later, someone cheerful is hired to start an AI initiative and the cycle restarts.

What I’d do differently

Begin with an AI Adoption Review: a short diagnostic that answers three questions in writing. What, in your product, does AI make measurably better for a real user? What data, infrastructure and evaluation loop does that require? What is the smallest thing we can ship in six weeks that you can defend in a board meeting? No platform pitch. No framework purchase. The output is a shortlist of AI moves that fit your product, your stack, your team — each with effort and risk estimates. Then — only then — hands-on engineering to bridge the gap from capable to shipped, pair-programmed with your people so the know-how stays when I leave.

02

Fifty hands cost more than ten minds — in the end, and sometimes in the middle.

Fifty developers, none of them yours

Situation

The pressure is real. A roadmap item is late, a competitor is closer than anyone wants to admit, and the cheapest way to look like you are moving is to buy capacity. A staffing agency sends a deck with smiling faces. An offshore partner promises a bench. Suddenly there are fifty people on the project, seven Slack workspaces, three time zones, and one exhausted tech lead trying to keep a mental map of who is allowed to merge what.

How it usually ends

Quality becomes a lottery. When the contract ends, the knowledge leaves with it. Your in-house team — the people who stay — is quietly worse at the product than before.

What I’d do differently

Software craftsmanship is not an arbitrage play. The move is an In-House Upskilling Sprint: pair-program with the senior half of your team for a quarter, on a real problem that is already on the roadmap. Code reviews that teach rather than gate. Architecture decisions written down where the team can defend them. Dedicated spikes on the techniques they don’t yet have — AI tooling, DSLs, functional architecture where it earns its keep. You bring in the turbo-boost; your people keep the result. After a quarter, the product carries your team’s fingerprints, not a consultant’s.

03

Green dashboards on cold projects — the saddest lie in our industry.

Coverage at 92%, shipping at zero

Situation

A team has been doing everything right, or everything that sounds right. Unit tests added. Coverage climbing. Static analyzers adopted. Dashboards set up, and the dashboards are green. And yet features take three sprints. Release day is a ritual of incantations. Nobody refactors anything central because nobody remembers why it’s central. The project is still broken; it has been green on every dashboard and broken in practice for six months.

How it usually ends

Someone proposes a rewrite. The rewrite is approved partially, funded partially, staffed with whoever is available. Eighteen months later, the rewrite is a new legacy with slightly newer dependencies.

What I’d do differently

When a project has fallen in the well, ropes from above are the answer — not more metrics. An Architecture Second Opinion, written: what’s working, what’s decaying, what to cut and what to keep. The pragmatic question is always the same: does this change produce value now, or over time? If neither — cut it. Then a Pragmatic Delivery Review to identify the two or three practices that actually produce value and recommend what to drop. You will not get a rewrite pitch from me unless a rewrite is genuinely cheaper than the rescue — which is rare. The goal is software that stays soft: changeable, understandable, and cheap to move.

04

The “two-week shine” — it costs twice once you count the loss of faith afterwards.

Two weeks of clarity, then gravity

Situation

You paid for the workshop. You hired the well-known consultancy. For two weeks, everything was inspired. There was a deck. There was a Slack channel called #transformation. Someone said the words “psychological safety” in a meeting and nobody laughed. Then normal gravity resumed. The certificates went into the drawer. The ceremony schedule came back. The budget line is smaller, and the problem is exactly the same size as before.

How it usually ends

Quiet. A year later, a new consultant arrives with a new deck. The team has learned, correctly, not to get too excited this time.

What I’d do differently

A useful engagement ends with something your team owns — not a certificate, not slide 42. So every shape I offer is built around that rule. Training on your codebase, not on a sample repo. Consulting that leaves a written argument — a decision they can defend after I’m gone. Engineering pair-programmed with your people, handed back fully documented. If a piece of the work can’t be handed off, I say so before the kick-off. The measure of success is simple: can your team ship the next thing without me? If yes, it worked. If no, it didn’t — regardless of how good the two weeks felt.

How we work

Three services, five shapes, one rule.

Training

Workshops, live-coding, conference talks — delivered on your code, not a toy repo.

Consulting

Audits, architecture reviews, sparring, decision support — written down so the team can defend it later.

Engineering

Hands-on AI pipelines, tooling, DSLs — built at product quality from minute one and handed back documented.

  1. AI Adoption Review — diagnostic for teams that know AI belongs on the roadmap but aren’t sure where. Output: a defensible shortlist.
  2. In-House Upskilling Sprint — short, high-bandwidth engagement that leaves your developers able to ship the next thing without outside help.
  3. Architecture Second Opinion — neutral, time-boxed read of where the system is heading. A written argument, not a deck.
  4. Pragmatic Delivery Review — for teams stuck in ceremony theater. Start from the Agile Manifesto, not the framework textbook.
  5. Hands-on Engineering — selective; when the problem is knotty enough that a tool-hire makes sense, I build, pair, and hand back.

The rule: every engagement ends with something your team owns.

Proof

Things I’ve already built and shipped — because I’d rather show than claim.

  • Recognized F# Expert, F# Software Foundation — Applied F# 2019. foundation.fsharp.org
  • FsHttp — invited by Don Syme (creator of F#) to the official fsprojects organization. 499★, 128 dependent packages. fsprojects/FsHttp
  • TypeFighter — a research language with structural types and inference-first design. SchlenkR/TypeFighter
  • BobKonf 2024 — Computation Expressions in F#, full tutorial track. bobkonf.de
  • Recurring features in F# Weekly (Sergey Tihon, Microsoft MVP).
  • Co-host at Amplifying F# — community format with G-Research OSS.

For the curious

If you’re curious who’s behind this page

Outside client work, I build. PXL Clock is a 24×24 programmable LED display I co-founded with Sefa — engineered end-to-end by two people, shipping in limited batches from Frankfurt. pxlclock.com

TypeFighter is my experimental programming language — a modern, inference-first type system where records match by shape, not by declared name. Research-grade, explained end-to-end. github.com/SchlenkR/TypeFighter
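“Records match by shape, not by declared name” is structural typing, the same discipline TypeScript uses. A minimal TypeScript sketch of the principle, with illustrative names (this is not TypeFighter syntax):

```typescript
// Structural typing: a value matches a record type by its shape,
// not by the name of any declared type.
type Point = { x: number; y: number };

// p never mentions Point; it merely has the right shape.
const p = { x: 3, y: 4 };

function norm(pt: Point): number {
  return Math.sqrt(pt.x * pt.x + pt.y * pt.y);
}

// Accepted by the type checker: the shape is what counts.
const len = norm(p);
```

In a nominally typed language such as C# or Java, that call would be rejected unless p were explicitly declared as a Point; in a structural system, the declared name is irrelevant.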

@ThePureState on YouTube is my channel for the longer arguments — language design, functional programming, AI workflows, the craft that underlies the consulting. A good place to start: How To Make a Programming Language. youtube.com/@ThePureState

And plenty of other work on GitHub — FsHttp (499★, the F# HTTP library), Trulla (type-safe templates), TheBlunt (parser combinators), LocSta (stateful stream processing).

Objections, answered

Before you write.

Sounds expensive.
Less than hiring a senior full-time. Scoped explicitly; no retainer trap.
Can you do this remotely?
DACH and remote EU, both supported. Onsite workshops possible for team-facing engagements.
We already have an agency.
An agency builds more capacity. I make your existing team capable. Different job.
We don’t use F#.
Good — most of my client work is C#/.NET, TypeScript, and the mix everyone actually has. F# is how I think; it’s not a prerequisite.
We only need help for two weeks.
Then you don’t need me. I’m interested in engagements that leave something standing after I’m gone.

About

Why me, for this problem.

Ronald Schlenker — fifteen years in .NET, creator of FsHttp, TypeFighter, and several other OSS libraries the F# community uses. Recognized F# Expert (F# Foundation, 2019). Co-founder of the PXL Clock — a programmable hardware product that is itself a working example of pragmatic engineering: small team, in-house discipline, shipped without a framework textbook.

The reason this page is written the way it is: every dying software project I have seen died the same way — buzzword compliance replacing engineering judgement. The consulting I sell is the opposite of that.

Based in Frankfurt. Work with DACH and remote-EU teams. Trading as PureState IT Consulting.

If one of these field notes describes the room you are sitting in —

Start a conversation

hello@schlenkr.dev