My app uses AI for several features — scoring ideas, categorising experiments, generating summaries, and a streaming chat. All of it was built on PrismPHP, a community package that wraps multiple AI providers. When Laravel released its official AI SDK (laravel/ai), we migrated.
The migration PR looked clean — 22 files changed, 484 lines removed net. But when I tested locally, two agents were silently broken.
## What changed architecturally
Before: One generic `PrismAIService` class that every AI feature called through. Each action built its own Prism request inline:

```php
$response = $this->aiService->generateText($systemMessage, $userMessage);
```
After: Each AI task is a self-contained Agent class with declarative configuration via PHP attributes:

```php
#[Provider(Lab::OpenAI)]
#[Model('gpt-5-mini')]
#[MaxTokens(4096)]
class TitleGeneratorAgent implements Agent
{
    use Promptable;

    public function instructions(): Stringable|string
    {
        return 'You are a title generator for marketing experiment ideas...';
    }
}
```
Actions now instantiate the right agent and call `->prompt()`:

```php
$response = (new ExperimentSummaryAgent($systemMessage))->prompt($userMessage);
```
Testing got simpler too — `Agent::fake()` with plain arrays instead of building fake response objects:

```php
// Before (Prism)
Prism::fake([
    StructuredResponseFake::make()
        ->withStructured(['channel_id' => 7, 'reasoning' => '...'])
        ->withFinishReason(FinishReason::Stop)
        ->withUsage(new Usage(100, 50)),
]);

// After (Laravel AI SDK)
CategoryClassificationAgent::fake([
    ['channel_id' => 7, 'reasoning' => '...'],
]);
```
## Gotcha 1: Not all models support temperature
The PR added `#[Temperature(0.7)]` to every agent. But `gpt-5-mini` and `gpt-5-nano` don’t support the temperature parameter — OpenAI returns a 400 error. My notes from testing at the time:

> Just ran test 1 — all looks good. Moved on to test 2 and seem to be hitting a few issues — the idea name was ‘campaign name pending’ and the score was never populated, which makes me think `IdeaScoringAgent` and `TitleGeneratorAgent` are not running correctly.
The scoring agent, title generator, experiment summary agent, and design considerations agent all used models that don’t support temperature. The fix was removing the attribute from those four agents — only gpt-5.2 (used by the chat and category agents) supports it.
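Conceptually, the fix works because declarative configuration only sends parameters that are actually declared. Here is a minimal plain-PHP sketch of that idea; the `Temperature` attribute and `requestParams()` helper are defined locally for illustration and are not the SDK's real internals:

```php
<?php

// Illustration only: a locally defined Temperature attribute plus a
// reflection helper, showing why removing the attribute keeps the
// parameter out of the request entirely.
#[Attribute(Attribute::TARGET_CLASS)]
class Temperature
{
    public function __construct(public float $value) {}
}

#[Temperature(0.7)]
class ChatAgent {}           // gpt-5.2: temperature supported

class TitleGeneratorAgent {} // gpt-5-mini: attribute removed

function requestParams(object $agent): array
{
    $params = [];
    $attributes = (new ReflectionClass($agent))->getAttributes(Temperature::class);

    if ($attributes !== []) {
        // Only include the parameter when the attribute is declared.
        $params['temperature'] = $attributes[0]->newInstance()->value;
    }

    return $params;
}

echo json_encode(requestParams(new ChatAgent())), "\n";           // {"temperature":0.7}
echo json_encode(requestParams(new TitleGeneratorAgent())), "\n"; // []
```

No attribute means no key in the request payload, so the provider never sees a parameter it would reject.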
## Gotcha 2: Strict mode requires `withoutAdditionalProperties()` on nested objects
The `IdeaScoringAgent` returns structured output — a JSON array of scores. OpenAI’s strict mode requires `additionalProperties: false` on every object in the schema, including nested ones. The Laravel AI SDK only sets this automatically on the root object.
Before — nested object missing the constraint:

```php
public function schema(JsonSchema $schema): array
{
    return [
        'scores' => $schema->array()->items(
            $schema->object([
                'id' => $schema->integer()->required(),
                'score' => $schema->integer()->required(),
                'description' => $schema->string()->required(),
            ])
        )->required(),
    ];
}
```
After — adding `withoutAdditionalProperties()` to the nested object:

```php
$schema->object([
    'id' => $schema->integer()->required(),
    'score' => $schema->integer()->required(),
    'description' => $schema->string()->required(),
])->withoutAdditionalProperties()
```
Without this, OpenAI sometimes rejects the request outright and sometimes returns unexpected results with no obvious error.
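For reference, this is the shape strict mode expects for the scores payload, written here as a plain array rather than the SDK's `JsonSchema` builder (my sketch of the resulting schema, not the SDK's actual output): both the root object and the nested item object must carry `additionalProperties: false`.

```php
<?php

// The JSON Schema strict mode wants for the scores payload. Every object
// level, including the nested array item, must disable extra properties.
$schema = [
    'type' => 'object',
    'properties' => [
        'scores' => [
            'type' => 'array',
            'items' => [
                'type' => 'object',
                'properties' => [
                    'id'          => ['type' => 'integer'],
                    'score'       => ['type' => 'integer'],
                    'description' => ['type' => 'string'],
                ],
                'required' => ['id', 'score', 'description'],
                // The constraint the withoutAdditionalProperties() fix adds:
                'additionalProperties' => false,
            ],
        ],
    ],
    'required' => ['scores'],
    'additionalProperties' => false, // the SDK sets this root-level one itself
];

echo json_encode($schema, JSON_PRETTY_PRINT), "\n";
```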
## Gotcha 3: `catch (Exception)` vs `catch (Throwable)`
Several action classes used `catch (Exception $e)`, which only catches `Exception` and its subclasses: the errors your own code or a library throws deliberately. It doesn’t catch `Error`, which PHP itself raises for deeper problems like type mismatches or calling a method on `null`.
> I don’t understand the pros and cons of `catch Throwable` vs `catch Exception`. What would Taylor Otwell do here?
The answer, backed by Laravel’s own source code:
| Pattern | Occurrences in Laravel |
|---|---|
| `catch (Throwable ...)` | 65 |
| `catch (Exception ...)` | 26 |
The convention: catch broadly with `Throwable`, throw specifically with a named exception class. For AI actions where you want to log the error and not crash, `catch (Throwable $e)` is the right choice:
```php
try {
    $response = (new DesignConsiderationsAgent($systemMessage))->prompt($userMessage);
    $idea->update(['plan' => str()->markdown($response->text)]);
} catch (Throwable $e) {
    Log::error('Failed to generate design considerations', [
        'error' => $e->getMessage(),
        'idea_id' => $idea->id,
    ]);
}
```
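To see concretely what `catch (Exception)` misses, here is a standalone plain-PHP sketch (no SDK involved; the `classify()` helper is mine, just for demonstration):

```php
<?php

// Error subclasses (TypeError, etc.) sit on a separate branch of the
// Throwable hierarchy, so a catch (Exception) block never sees them.
function classify(callable $risky): string
{
    try {
        $risky();
        return 'ok';
    } catch (Exception $e) {
        return 'Exception: ' . get_class($e);
    } catch (Throwable $e) {
        return 'Throwable: ' . get_class($e);
    }
}

echo classify(fn () => throw new RuntimeException('API failed')), "\n"; // Exception: RuntimeException
echo classify(fn () => strlen([])), "\n";                               // Throwable: TypeError
```

The second call falls through the `Exception` arm entirely: PHP 8 raises a `TypeError`, which extends `Error`, not `Exception`.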
The log messages were also updated from provider-specific (`Failed to fetch response from OpenAI`) to provider-agnostic (`Failed to generate design considerations`) — since the whole point of the migration was to abstract away the provider.
## The migration by the numbers
| | Before (Prism) | After (Laravel AI) |
|---|---|---|
| AI service class | 1 generic (204 lines) | 6 focused agents |
| Configuration | Runtime (constructor args) | Declarative (PHP attributes) |
| Test faking | `Prism::fake()` with response builders | `Agent::fake()` with plain arrays |
| Dependencies | `prism-php/prism` + `openai-php/client` | `laravel/ai` |
| Lines changed | — | +1,169 / -1,653 (net -484) |
The SDK is pre-1.0 (v0.2.5), so the API may change. But the agent-per-task pattern and attribute-based configuration already feel more Laravel-like than a single service class handling everything.