
Model Card

A model card is a structured document that describes an AI model’s intended use cases, performance characteristics, evaluation results, training data sources, known limitations, and ethical considerations. Introduced in a 2019 Google research paper by Mitchell et al., model cards have become the standard documentation artifact for responsible AI deployment and are increasingly required by enterprise customers, regulators, and acquirers during technical due diligence.

A model card serves as the primary documentation interface between an AI model’s developers and the people who will deploy or evaluate it. In contrast to technical papers or API documentation that describe how a model works, a model card describes what the model is for, how well it performs across different use cases and populations, and what can go wrong when it is deployed outside its intended scope. The distinction matters because AI products that perform well in aggregate often perform unevenly across subgroups, languages, or input types, and those performance disparities are exactly the information that enterprise customers and regulators are most likely to request.

For AI companies undergoing M&A or fundraising due diligence, the presence or absence of model cards is a governance maturity signal. Acquirers conducting technical due diligence in 2025 and 2026 have increasingly standardised the request for model cards alongside model weights, evaluation datasets, and training data schedules.


Standard Model Card Components

A complete model card typically addresses seven areas:

1. Model description. The model’s name, version, architecture (for example, transformer, diffusion, or classifier), input and output modalities, and the team or organisation that developed it. A model description section should identify whether the model is built on a foundation model base, and if so, which one and under which license.

2. Intended use. The use cases the model was designed and evaluated for, and the use cases it was explicitly not designed for. An intended use section written with specificity is more valuable to a potential acquirer than a broad statement because it defines the scope of the company’s evaluation program.

3. Factors. The variables that are expected to affect model performance: demographic groups, languages, geographic contexts, hardware environments, or input distribution characteristics. A factors section for a document processing AI might identify that the model was evaluated on English and Japanese but not Korean or Thai documents, and that performance on scanned PDFs with handwritten annotations was not benchmarked.

4. Metrics. The quantitative performance measurements used to evaluate the model, including the choice of metric (accuracy, F1, BLEU, ROUGE, AUC-ROC), the test datasets used, and the results by relevant subgroup where applicable. A metrics section should report disaggregated performance, not only overall averages, particularly for models deployed in high-stakes applications.

5. Evaluation data. The sources of the data used to evaluate the model, their collection method, and whether the evaluation dataset represents the full distribution of deployment conditions. Evaluation data sourced from the same distribution as training data overstates likely deployment performance.

6. Training data. The sources of training data, any processing or filtering applied, and any known limitations in the training distribution that may affect deployment. For AI companies whose product differentiation depends on proprietary training data, the training data section of a model card requires careful balancing: sufficient disclosure to satisfy due diligence, without revealing competitive information that would help a competitor reproduce the training program.

7. Ethical considerations and known limitations. The failure modes the development team has identified, including bias risks, adversarial vulnerabilities, and performance degradation under distribution shift. A model card that documents known failure modes honestly signals a mature development team; a model card that claims no known limitations signals the opposite.
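The seven sections above lend themselves to a machine-readable internal template, which makes completeness checks across a model portfolio straightforward. A minimal Python sketch follows; the field names are an illustrative internal schema, not a formal standard:

```python
from dataclasses import dataclass, asdict

@dataclass
class ModelCard:
    """Skeleton covering the seven standard model card sections.
    Field names are a hypothetical internal schema, not a published spec."""
    model_description: dict      # name, version, architecture, base model + license
    intended_use: dict           # in-scope and explicitly out-of-scope use cases
    factors: list                # variables expected to affect performance
    metrics: dict                # metric -> results, disaggregated by subgroup
    evaluation_data: dict        # sources, collection method, distribution coverage
    training_data: dict          # sources, filtering applied, known distribution limits
    ethical_considerations: list # documented failure modes and bias risks

    def missing_sections(self):
        # Any empty section makes the card incomplete for diligence purposes.
        return [name for name, value in asdict(self).items() if not value]

card = ModelCard(
    model_description={"name": "doc-extract", "version": "2.1",
                       "architecture": "transformer"},
    intended_use={"in_scope": ["invoice field extraction"],
                  "out_of_scope": ["medical records"]},
    factors=["language", "scan quality"],
    metrics={"f1": {"overall": 0.91, "en": 0.93, "ja": 0.84}},
    evaluation_data={"source": "held-out customer sample"},
    training_data={"sources": ["licensed corpus"],
                   "filtering": "dedup + PII removal"},
    ethical_considerations=[],  # left empty here to show the completeness check
)
print(card.missing_sections())  # -> ['ethical_considerations']
```

Running the completeness check across every production model gives a quick inventory of documentation gaps before a diligence request arrives.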


Model Cards in AI M&A Due Diligence

The APAC AI M&A market has developed a consistent set of technical diligence requests that go beyond traditional software M&A. Model cards have become a standard item in that request list because they surface several issues that acquirers need to understand before pricing a transaction.

IP and training data provenance. A well-written model card’s training data section identifies whether the training corpus used licensed data, open-source datasets, proprietary customer data, synthetic data, or some combination. This is directly relevant to the IP representations and warranties the seller will be required to make in a purchase agreement. Acquirers increasingly request that training data be categorized by source and license type in the transaction’s AI IP schedule, which is easier to produce if the model card already contains this information.

Performance under APAC deployment conditions. An AI model evaluated exclusively on English-language inputs may perform materially differently when deployed for Japanese, Korean, Chinese, or Thai-language users. Model cards that document multilingual evaluation results provide the acquirer with early evidence of whether the model’s claimed performance is relevant to APAC deployment conditions. Acquirers discovering significant performance gaps in APAC languages during post-acquisition integration have consistently described this as among the most costly surprises, both in engineering effort to remediate and in customer trust damage during the transition period.

Regulatory compliance readiness. Several APAC regulatory frameworks are moving toward requirements that AI systems used in high-stakes applications be accompanied by documentation analogous to model cards. Singapore’s Model AI Governance Framework (updated in 2024) includes guidance on AI system documentation requirements for financial institutions. Japan’s AI governance framework and the EU AI Act, which can apply to cross-border deployments that reach EU users, both include documentation requirements for high-risk AI systems that model cards help satisfy. A company with a mature model card program is more readily compliant with these frameworks than a company where model documentation remains informal.

Absence of model cards as a governance signal. AI companies that have not developed model cards for their production models typically have not built the evaluation infrastructure that would allow them to write accurate model cards. The absence of model cards is therefore often a proxy for absence of systematic evaluation: no regular performance benchmarking, no disaggregated performance analysis, no structured process for identifying and documenting failure modes. This evaluation gap creates meaningful integration risk for an acquirer expecting to deploy the model in new markets or use cases post-acquisition.


Model Cards and APAC Regulatory Context

APAC regulators are at varying stages of incorporating model card requirements into their AI governance frameworks:

Singapore (Monetary Authority of Singapore / PDPC). MAS’s FEAT (Fairness, Ethics, Accountability, Transparency) principles for AI in financial services, and the more specific guidance in the MAS Veritas framework, are operationally consistent with model card requirements. Singapore-based AI companies applying for MAS financial services licenses or responding to MAS technology risk management guidelines are increasingly expected to produce documentation covering the topics that model cards address.

Japan (Ministry of Economy, Trade and Industry). Japan’s AI Guidelines for Business (updated in 2024) and the METI-led AI governance framework explicitly recommend documentation of AI system characteristics, intended use, and known limitations for AI deployed in business contexts. While not legally mandatory, compliance with these guidelines is effectively required for enterprise customer procurement in large Japanese organisations.

China (MIIT / CAC). China’s Interim Measures for the Management of Generative AI Services (effective August 2023) require providers of AI-generated content services to document training data sources and conduct security assessments. The security assessment framework is broader than a model card but includes model card-style requirements for training data documentation and content safety evaluation.

Australia (OAIC / Digital Platforms Inquiry). Australia’s AI Ethics Framework and the OAIC guidance on privacy in AI development both recommend the kind of transparency documentation that model cards provide. Australia’s Privacy Act amendments, moving through parliament in 2024-2025, include provisions that may formalise documentation requirements for AI systems that process personal information.

Korea (PIPC / Ministry of Science and ICT). Korea’s Personal Information Protection Commission has published guidelines on AI privacy protection that include documentation requirements consistent with model card best practice, particularly for AI systems that process biometric data or make automated decisions affecting individuals.


Writing Model Cards for Transaction Readiness

AI companies preparing for a fundraising or M&A process should treat model card development as part of pre-transaction preparation, not as a reactive response to diligence requests. A model card developed under time pressure during a due diligence process will typically be incomplete, internally inconsistent, or overly defensive in ways that raise more questions than they answer.

The recommended preparation sequence is to document all production models in a consistent internal format, conduct a performance audit that includes at least the primary evaluation metrics disaggregated by the most significant subgroups (language, geography, customer tier), and review the training data inventory for completeness and license documentation. That work takes weeks if undertaken proactively and months if initiated in response to a due diligence request.
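The performance audit step, disaggregating a primary metric by subgroup, is mechanically simple once evaluation logs exist. A sketch under assumed inputs: the record format (dicts with a `correct` flag and a `language` field) and the accuracy metric are illustrative, not a prescribed format:

```python
from collections import defaultdict

def disaggregated_accuracy(records, group_key):
    """Compute accuracy overall and per subgroup.
    `records` is a list of dicts with a boolean 'correct' field and a
    grouping field (e.g. 'language') -- an assumed evaluation-log format."""
    totals, hits = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        hits[group] += bool(record["correct"])
    per_group = {g: hits[g] / totals[g] for g in totals}
    overall = sum(hits.values()) / sum(totals.values())
    return overall, per_group

# Illustrative log: the aggregate number hides a weak subgroup.
log = (
    [{"language": "en", "correct": True}] * 90
    + [{"language": "en", "correct": False}] * 10
    + [{"language": "th", "correct": True}] * 6
    + [{"language": "th", "correct": False}] * 4
)
overall, by_lang = disaggregated_accuracy(log, "language")
print(round(overall, 2), by_lang)  # overall 0.87; en at 0.9, th at 0.6
```

The aggregate figure alone would suggest a healthy model; the per-language breakdown surfaces exactly the kind of gap an acquirer evaluating APAC deployment will probe.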

Amafi Advisory advises AI company founders on sell-side M&A and fundraising in Asia Pacific. For AI companies where technical governance documentation, training data IP, and foundation model licensing questions are relevant to transaction positioning, get in touch to discuss how to structure the technical due diligence preparation.

Related terms

foundation model, fine-tuning, synthetic data, inference cost, embeddings