An essay by Michael Lin, Stanford University
In systems and circuit neuroscience, the dominant experimental paradigm of the last decade has been imaging of neural activity using genetically encoded voltage and calcium indicators, and control of genetically defined neuronal populations using light-actuated ion channels, i.e. microbial opsins. These technologies, collectively termed optogenetic methods, have transformed the field — but accessing them requires a trainee to first become an expert surgeon. Before stimulating or recording a single neuron, a new graduate student or postdoc must master the complex multi-step process of expressing these genetically encoded actuators and reporters and implanting windows or lenses for optical access.
At a granular level, this work involves coordinating viral packaging at an AAV production facility; characterizing viral titers and comparing lot-to-lot variability across facilities; optimizing injection volumes, coordinates, and speeds; and choosing and testing combinations of promoters, Cre driver lines, and Cre-dependent reporters to achieve cell-type specificity. Finally, the researcher must implant the appropriate optical element — a cranial window, gradient-index (GRIN) lens, prism, or fiber optic cannula — with sufficient precision to image the target population without causing damage that interferes with the very brain function being studied.
Each of these steps involves a steep learning curve, and they interact. A well-packaged virus injected at the wrong coordinates, or a correctly placed window over tissue with poor indicator expression, means starting over. In practice, trainees routinely spend one to three years — a substantial fraction of a PhD, and a large fraction of a postdoc — simply acquiring the surgical and technical competency to generate usable imaging data. This is not an exaggeration: it is the common experience of neuroscience trainees across the field.
What is striking is that this reality is widely recognized but rarely named explicitly. It is, in effect, the elephant in the room of systems neuroscience training: nearly every practitioner knows that an enormous fraction of trainee effort is devoted not to scientific reasoning but to mastering a narrow and fragile surgical preparation. Yet because no clear, scalable alternative has existed, the field has adapted by normalizing this cost. Principal investigators depend on trainees to carry out these procedures, and trainees, in turn, are motivated to persist by the expectation that this investment is unavoidable. The result is a kind of collective silence — not because the problem is subtle, but because, until now, it has not seemed solvable.
The consequences compound in ways that are rarely discussed explicitly. First, the time cost is simply lost: these are years not spent testing hypotheses, developing theories, or making conceptual progress. Second, and more insidiously, trainees become locked in. Having invested years mastering a preparation for a specific brain region — say, primary visual cortex or dorsal CA1 — researchers face enormous friction when a scientific question requires moving to a new structure. Learning a new surgery from scratch resets much of that investment. The rational response is to stay put, to frame new scientific questions as extensions of the region one already knows how to image. The result is a structural bias in what questions get asked: the map of the brain that gets studied is shaped less by where the important questions are and more by where people already know how to point a microscope.
One might ask: why don't labs simply train technicians to handle surgical preparation? Some large, well-resourced labs do maintain dedicated surgery staff. But this is not scalable across the field; most labs lack the resources, and even those that employ a technician typically have one person covering one or two preparations. The deeper problem is that the expertise required is not just technical dexterity. It requires genuine scientific judgment to troubleshoot: to distinguish a failed injection from a failed viral batch from a misplaced implant, and to optimize across the many interacting variables. A single technician cannot feasibly maintain expertise across a comprehensive range of brain regions, cell types, indicator choices, and optical configurations.
The natural market solution would be a contract research organization (CRO) that offers ready-to-image mice as a catalog product: animals already expressing the desired indicator in the specified cell type and brain region, with the optical element implanted and verified, shipped to the researcher's lab for immediate imaging. This would allow scientists to skip the surgical training entirely and proceed directly to the experimental question. However, no such company currently exists. The reason, I hypothesize, is a talent supply problem compounded by a coordination problem. The scientists capable of performing this work are either motivated to pursue careers as lab heads or able to obtain stable, high-paying jobs in industry. That is, people with deep expertise in viral vector biology, stereotaxic surgery, optics, and indicator characterization have the skills to run their own research programs, even if mostly restricted to the combination of elements they were trained in; if they prefer not to pursue an academic career, they can join a company. Meanwhile, a CRO seeking to offer surgery and optogenetic expression services to academic labs for a fee would need to begin with high credibility and good publicity, both to hire qualified people and to persuade potential customers to try it. The result is that, to my knowledge, no organization has yet attempted to offer such services.
This is a classic problem of coordination and activation energy. Each individual lab cannot unilaterally solve it; the solution requires aggregating demand and expertise simultaneously, and no existing institution has enough funding and influence to set this up from scratch.
My hypothesis is that a dedicated Optogenetic Foundry (OGF) providing ready-to-image mouse preparations with built-to-order optogenetic and optical elements would greatly accelerate neuroscience discovery, improving reproducibility and removing barriers to studying any brain region. The OGF would employ a team of expert surgical scientists, provide them a stable runway, and reward them with industry-equivalent incentives as well as co-authorships. It could then build a catalog of preparations broad enough to be economically viable. The hypothesis predicts that the per-preparation cost to the field (including the true cost of trainee time) will be dramatically lower than under the status quo once the OGF is up and running.
This hypothesis can be tested in stages:
Stage 1 — Demand validation: Survey active imaging labs on willingness to pay for specific preparations, prioritized by scientific demand. Which cell types and brain regions represent the highest unmet need? What price points are feasible given existing lab budgets? This pilot would cost little and could be done in months.
Stage 2 — Proof of concept production: Hire a small founding team (3–5 scientists with complementary regional expertise), identify the 10–15 highest-demand preparations, and produce and ship a pilot cohort of mice to partner labs. Rigorously measure: imaging quality, animal-to-animal consistency, and the time-to-first-usable-data compared to labs doing the preparation in-house.
Stage 3 — Catalog expansion and cost modeling: If pilot preparations meet quality thresholds, model the scaling economics. How does per-mouse cost fall as volume increases? At what catalog breadth and order volume does the operation become self-sustaining? What compensation and credit structures are required to retain expert staff?
Falsification criteria: If demand surveys reveal that labs would not pay a price sufficient to cover production costs at any feasible volume, or if shipped mice consistently fail to meet quality thresholds that in-house preparations achieve, the hypothesis is falsified and the OGF can be unwound, or substantially redesigned in response to new market intelligence.
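The Stage 3 cost modeling can be sketched as a simple break-even calculation. The sketch below is illustrative only: every figure (staff costs, per-mouse variable cost, catalog price) is a hypothetical placeholder I have chosen for the example, not an estimate from this essay.

```python
import math

# Hypothetical break-even sketch for Stage 3 cost modeling.
# All dollar figures are illustrative placeholders, not real estimates.

def per_mouse_cost(annual_fixed_cost, variable_cost, volume):
    """Total cost per shipped mouse at a given annual order volume."""
    return annual_fixed_cost / volume + variable_cost

def break_even_volume(annual_fixed_cost, variable_cost, price):
    """Smallest annual volume at which revenue covers total cost.

    Solves price = fixed/volume + variable for volume.
    Returns None if the price does not exceed the variable cost,
    i.e. no volume can ever break even.
    """
    margin = price - variable_cost
    if margin <= 0:
        return None
    return math.ceil(annual_fixed_cost / margin)

# Example: 4 expert scientists at ~$250k fully loaded, plus facility overhead.
FIXED = 4 * 250_000 + 200_000   # $1.2M/year in fixed costs
VARIABLE = 1_500                # per mouse: animal, virus, implant, shipping
PRICE = 4_000                   # hypothetical catalog price per ready-to-image mouse

vol = break_even_volume(FIXED, VARIABLE, PRICE)
print(f"Break-even volume: {vol} mice/year")
print(f"Per-mouse cost at that volume: ${per_mouse_cost(FIXED, VARIABLE, vol):,.0f}")
```

The same functions answer the Stage 3 questions directly: sweeping `volume` shows how per-mouse cost falls as fixed costs are amortized, and sweeping `price` against survey data from Stage 1 shows whether any feasible catalog breadth reaches self-sustainability.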
Several converging developments make this moment uniquely favorable. The optogenetic reporter toolkit has matured: jGCaMP8 variants, ASAP voltage indicators, and soma-targeted constructs have reached quality and reliability thresholds where standardization is genuinely feasible, and the new HcKCR and ChRmine opsins produce order-of-magnitude higher negative and positive photocurrents, respectively, than previous opsin-based tools. Privately funded companies such as Science Inc have recently engineered even more effective opsins. The AAV manufacturing ecosystem has professionalized, with multiple facilities such as Addgene and the Stanford Gene and Viral Vector Core capable of consistently producing well-characterized preparations; these facilities could become reliable suppliers to the OGF. Transgenic Cre driver lines are now comprehensive and well-annotated (Allen Institute lines, GENSAT lines), meaning cell-type targeting strategies can be systematized across regions rather than reinvented for each.
At the same time, the scientific case for comparative and multi-region studies has never been stronger. Questions about how computations are implemented across circuits — whether grid-cell-like representations are truly unique to entorhinal cortex, how prefrontal control signals propagate through subcortical structures, how neuromodulatory dynamics differ across regions during learning — require exactly the kind of flexible, multi-region access that the current training bottleneck systematically prevents. The field has the tools to ask these questions; it lacks the infrastructure to deploy them efficiently.
There is also a workforce moment. The large cohort of postdocs trained over the past decade in two-photon and fiber photometry imaging is moving into career transitions. Some are well-positioned to be founding scientists of exactly the kind of organization described here, if the organization exists and can offer a compelling professional path. The supply of qualified talent, while not unlimited, is higher now than it has ever been and may ever be again.
If the OGF succeeds, the effect would be to decouple scientific creativity from surgical geography. A theorist who wants to test a computational model in a region they have never worked in could do so in weeks rather than years. A lab studying motor cortex could extend its work to cerebellum without rebuilding its technical stack. Early-career scientists could pursue the most interesting questions rather than the questions accessible to their current preparation. Labs in under-resourced institutions, which otherwise would lack the time or funding to support years of surgical training, could participate in experimental neuroscience at the frontier. The OGF would thus have broad positive impact across the field intellectually and across the country geographically.
The neuroscience optogenetic preparation bottleneck described here is not glamorous, and it does not appear in grant applications or strategic planning documents. It lives in the gap between what trainees are officially learning and what they spend most of their time doing. But setting up the Optogenetic Foundry is a straightforward structural solution that can truly revolutionize how neuroscientists perform their research. Just as the Human Genome Project allowed scientists more time to perform actual research, rather than mindlessly running and reading sequencing gels, the Optogenetic Foundry can free neuroscientists from rote, time-consuming, and technically challenging surgeries to concentrate on doing novel and rigorous experiments. They will spend more time gathering data and thinking about the brain and less time struggling with poor preparations and repeating experiments. The result will be a permanent step change in creativity, quality, and productivity in neuroscience.
If understanding how the biological computer of the brain works is interesting to you — if you've ever wondered how it generates every sensation, thought, and emotion and how it might skip a gear or two as it ages — and you have ideas for how to get an Optogenetic Foundry up and running, feel free to contact me.