2025-11-07T15:06:05+01:00
Engineers do not need better search interfaces. They need cheap, nearly-instant evaluation of designs, ideas, and constantly refined physics. Reducing the overhead in feedback loops is the game.
Simulation is the bottleneck. Legacy solvers, brittle scripts, memory-bound kernels, and scarce experts make iteration slow and fragile. Compute exists; the horizontal glue is missing.
Synopsys now dominates huge swaths of the simulation market after acquiring Ansys. The reality, though: ML adoption lags behind, and engineers struggle to assemble coherent workflows. Every analysis of simulation process and data management (SPDM) shows knowledge workers dissatisfied with existing infrastructure: brittle setups and silos that are hard to bridge.
Sovereignty has become a buying constraint. The integrated giants build in-house. Everyone else is stuck between legacy tooling and cloud vendors they can’t trust with trade secrets. What’s missing is execution infrastructure that runs on customer hardware, trains on customer data, and extracts nothing without permission.
Take an existing PDE foundation model, fine-tune it with 20–100 samples, and hit 1% precision. Orders of magnitude speedup. This isn’t speculative; the theory around operator learning is mature. What’s missing is deployment infrastructure that customers actually trust, and people pushing the envelope on the next k iterations of foundation models.
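A minimal sketch of that few-shot fine-tuning loop in PyTorch, assuming the pretrained operator is exposed as a standard nn.Module; the tensor layout, hyperparameters, and the relative-L2 proxy for the 1% target are illustrative placeholders, not a prescribed recipe.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def finetune(model: torch.nn.Module,
             inputs: torch.Tensor,    # e.g. (N, C_in, H, W) encoded geometries / boundary conditions
             targets: torch.Tensor,   # e.g. (N, C_out, H, W) reference solver fields
             epochs: int = 200,
             lr: float = 1e-4) -> torch.nn.Module:
    """Fine-tune a pretrained neural operator on a handful of customer samples."""
    loader = DataLoader(TensorDataset(inputs, targets), batch_size=8, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=1e-5)
    loss_fn = torch.nn.MSELoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

def rel_l2(pred: torch.Tensor, ref: torch.Tensor) -> float:
    """Relative L2 error, used here as a proxy for the 1% precision target."""
    return (torch.linalg.norm(pred - ref) / torch.linalg.norm(ref)).item()
```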
Map reality. Simulation data is IP-sensitive. Make it indexable: cheaply parse geometries, fields, and units, and make the space of explored designs searchable. Build per-customer infrastructure from composable blocks. First efficiency gain: eliminate redundancy across existing design studies.
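As a sketch of what “indexable” can mean in practice: a per-run record keyed by geometry, solver, fields, and parameters, so equivalent runs collapse to a single entry. The field names and hashing scheme are hypothetical, not a fixed schema.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class RunRecord:
    geometry_hash: str                    # content hash of the meshed geometry
    solver: str                           # e.g. "openfoam-11"
    fields: tuple[str, ...]               # e.g. ("pressure", "velocity")
    units: dict[str, str] = field(default_factory=dict)    # e.g. {"pressure": "Pa"}
    params: tuple[tuple[str, float], ...] = ()              # boundary / material parameters

    def key(self) -> str:
        """Stable key: identical setups map to the same index entry."""
        blob = repr((self.geometry_hash, self.solver, self.fields, tuple(sorted(self.params))))
        return hashlib.sha256(blob.encode()).hexdigest()

class DesignIndex:
    """Searchable map of explored designs; add() flags redundant reruns."""
    def __init__(self) -> None:
        self._runs: dict[str, RunRecord] = {}

    def add(self, rec: RunRecord) -> bool:
        k = rec.key()
        if k in self._runs:
            return False      # already simulated: skip the redundant run
        self._runs[k] = rec
        return True
```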
Automate and accelerate. Reduce time-to-solution with fine-tuned models. Active sampling of the design space for the problem at hand, whether by heuristics or continually learning models, grows the ground-truth set, which in turn improves the models that shorten engineers’ design iterations.
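One way that active-sampling loop can look, assuming an ensemble of fine-tuned surrogates and a callable ground-truth solver; the ensemble-variance acquisition is just one heuristic among many, and `solver` and `retrain` are stand-ins for the customer’s pipeline.

```python
import torch

def acquisition_scores(models: list[torch.nn.Module], candidates: torch.Tensor) -> torch.Tensor:
    """Ensemble disagreement as an uncertainty proxy, one score per candidate."""
    with torch.no_grad():
        preds = torch.stack([m(candidates) for m in models])   # (M, N, ...)
    var = preds.var(dim=0)                                      # disagreement per output value
    return var.reshape(var.shape[0], -1).mean(dim=1)            # (N,)

def active_round(models, candidates, solver, retrain, budget: int = 16):
    """Pick the most uncertain designs, label them with the expensive solver, retrain."""
    scores = acquisition_scores(models, candidates)
    picked = candidates[scores.topk(budget).indices]
    labels = solver(picked)                   # ground-truth simulation on a small batch
    return retrain(models, picked, labels)    # surrogates improve where they were weakest
```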
UQ at unprecedented scale. With classical solvers, uncertainty quantification is prohibitively expensive, especially when requirements from regulators and insurers carry heavy penalties for being wrong. Run large-scale ensembles at inference speed instead: sample the future far more precisely and thoroughly, at a fraction of the cost, on frameworks such as PyTorch and JAX that are already hardened at exaflop training scale.
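A sketch of what UQ at inference speed looks like with a surrogate: batched Monte Carlo over input perturbations instead of thousands of solver runs. The Gaussian perturbation model and sample counts are assumptions for illustration.

```python
import torch

@torch.no_grad()
def monte_carlo_uq(model: torch.nn.Module,
                   nominal: torch.Tensor,     # nominal design/conditions, shape (1, ...)
                   n_samples: int = 4096,
                   sigma: float = 0.02,
                   batch: int = 256):
    """Pointwise mean and std of the predicted field under Gaussian input perturbations."""
    outs = []
    for i in range(0, n_samples, batch):
        k = min(batch, n_samples - i)
        perturbed = nominal + sigma * torch.randn(k, *nominal.shape[1:])
        outs.append(model(perturbed))          # surrogate inference, not a solver call
    outs = torch.cat(outs)                     # (n_samples, ...)
    return outs.mean(dim=0), outs.std(dim=0)   # estimate and uncertainty band
```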
Deploy per domain. Requirements vary by orders of magnitude across industries: semiconductor fabs need ångström precision, automotive needs crash dynamics, energy needs reservoir modeling. In the end the pattern is the same: integrate a solver environment, introduce better sampling mechanisms, streamline curation and analysis, and gradually let fast inference take over. What every customer shares is the need for secrecy and sovereignty; what every customer gains is decision quality from fast inference pipelines. Build once per customer. IP never leaves.
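An illustrative, entirely hypothetical per-customer deployment descriptor, only to make the point that the same building blocks get re-parameterized per domain while everything runs on-premise; the field names and example values are not a real schema.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    domain: str               # "semiconductor" | "automotive" | "energy" | ...
    solver_adapter: str       # wrapper around the customer's existing solver environment
    error_budget: float       # relative accuracy target for surrogate inference
    sampling: str             # "heuristic" or "learned" active sampling
    on_premise: bool = True   # IP never leaves customer hardware

fab = Deployment("semiconductor", "tcad_adapter", 1e-4, "learned")
crash = Deployment("automotive", "explicit_fem_adapter", 1e-2, "heuristic")
reservoir = Deployment("energy", "reservoir_sim_adapter", 1e-2, "heuristic")
```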
Further out: differentiability, transferability, and environmental learning. ML models are differentiable, so inverse design becomes tractable, and model distillation narrows the sim2real gap. Deploy where Synopsys can’t: edge systems, proprietary processes, regulated environments.
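Where differentiability pays off directly, a sketch of gradient-based inverse design through a surrogate: optimize the design parameterization against an objective, with gradients flowing through the model. `objective` and the parameter tensor are placeholders.

```python
import torch

def inverse_design(model: torch.nn.Module,
                   init_params: torch.Tensor,   # design parameterization the model accepts
                   objective,                   # callable: predicted field -> scalar loss
                   steps: int = 500,
                   lr: float = 1e-2) -> torch.Tensor:
    """Gradient descent on design parameters through a differentiable surrogate."""
    params = init_params.clone().requires_grad_(True)
    opt = torch.optim.Adam([params], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = objective(model(params))   # gradients flow through the surrogate
        loss.backward()
        opt.step()
    return params.detach()
```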
We build the infrastructure; the engineering will follow.
We've spent years in the stack that matters: compiler optimization, GPU kernels, PyTorch internals, and scientific computing. We know how to take foundation models from research to production, debug distributed training at scale, and build systems that handle real engineering constraints.