Active Development
The process of designing, coding, and implementing new software features or systems. It involves translating requirements into functional code, applying best practices, and ensuring that new components integrate smoothly with existing systems.
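As a minimal sketch of "translating requirements into functional code", the snippet below turns a small, invented requirement ("return the total value of all orders at or above a given threshold") into a typed, documented function. The requirement, function name, and values are assumptions for illustration only.

```python
# Hypothetical requirement: "Return the total value of all orders
# at or above a given threshold." (invented for illustration)

def total_large_orders(orders: list[float], threshold: float) -> float:
    """Sum the order values that meet or exceed the threshold."""
    return sum(value for value in orders if value >= threshold)

print(total_large_orders([5.0, 20.0, 50.0], 10.0))  # 70.0
```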
Maturity Levels

| Level | Name | Description | Technologies | Example tools |
| --- | --- | --- | --- | --- |
| 0 | Non-existent | No AI capabilities are available for development tasks. All code creation, syntax validation, and design decisions are performed manually. There is no AI functionality in tools or processes. | | |
| 1 | One-off assist | AI can generate small code snippets and perform syntax correction. Suggestions can be copied and pasted from standalone tools when needed. There is no integration with IDEs or the development pipeline. | | |
| 2 | Integrated assist | AI is embedded within IDEs, providing real-time code completions, syntax checks, and contextual suggestions. It supports boilerplate generation and basic optimizations but does not independently manage architectural or large-scale design tasks. | | |
| 3 | AI-human collaboration | AI proposes code implementations, architectural patterns, and optimizations based on project context. It can generate entire functions or classes, adapting to existing coding standards. Humans then review, adjust, and approve AI output through iterative collaboration. | | |
| 4 | Full autonomy | AI autonomously manages the entire development cycle, including design, coding, and documentation. It interprets requirements, produces optimized solutions, runs validations, and integrates code with minimal human supervision. | | |

AI Maturity Level: indicates the level technology vendors claim to have reached in deploying AI solutions that actually work in real-world applications.
Legacy Code Refactoring
The practice of improving the structure and quality of existing code without changing its external behavior. The goal is to increase readability, maintainability, and performance while reducing complexity and technical debt.
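The definition above hinges on refactoring being behavior-preserving. A minimal sketch, using invented function names and constants: the legacy version has nested conditionals and magic numbers, the refactored version uses a guard clause and named constants, and a final check confirms both produce identical results.

```python
def shipping_cost_legacy(weight, express):
    # Original style: nested conditionals and magic numbers.
    if weight > 0:
        if express:
            cost = weight * 2.5 + 10
        else:
            cost = weight * 2.5
    else:
        cost = 0
    return cost

RATE_PER_KG = 2.5
EXPRESS_SURCHARGE = 10.0

def shipping_cost(weight: float, express: bool) -> float:
    """Refactored: guard clause and named constants; behavior unchanged."""
    if weight <= 0:
        return 0.0
    cost = weight * RATE_PER_KG
    if express:
        cost += EXPRESS_SURCHARGE
    return cost

# External behavior is identical across representative inputs:
for w, e in [(0, True), (4, False), (4, True)]:
    assert shipping_cost_legacy(w, e) == shipping_cost(w, e)
```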
Maturity Levels

| Level | Name | Description | Technologies | Example tools |
| --- | --- | --- | --- | --- |
| 0 | Non-existent | AI has no capability to assist in legacy code improvement. All restructuring, modernization, and dependency management are performed manually. Code complexity and technical debt are addressed reactively, if at all. | | |
| 1 | One-off assist | AI can suggest small refactorings such as renaming variables, removing unused code, or simplifying isolated logic. Actions are triggered manually and applied without system-wide context. | | |
| 2 | Integrated assist | AI assists consistently in detecting code smells, modernizing syntax, and improving maintainability. It can suggest changes across modules and integrate refactoring recommendations into IDEs or CI pipelines. | | |
| 3 | AI-human collaboration | AI analyzes full codebases, identifies systemic issues, and proposes major refactoring strategies. Developers collaborate by validating impacts, resolving conflicts, and adjusting AI-generated solutions to align with business goals. | | |
| 4 | Full autonomy | AI autonomously identifies legacy code, creates a refactoring plan, executes changes across the system, and validates functionality through automated testing. It maintains compatibility and performance without human intervention. | | |
Security Assessment
The evaluation of software systems to identify vulnerabilities, weaknesses, and compliance issues. It includes analyzing code, configurations, and dependencies to ensure the application meets security standards and protects sensitive data.
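A minimal sketch of one issue a code-level security assessment typically flags: SQL built by string interpolation, next to its parameterized remediation. The table, column names, and payload are invented for illustration; the binding syntax is the standard-library `sqlite3` placeholder style.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: user input is interpolated into the query (SQL injection).
    return conn.execute(
        f"SELECT role FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name: str):
    # Remediated: the driver binds the value, so input cannot alter the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A crafted input widens the WHERE clause in the unsafe version only:
payload = "x' OR '1'='1"
print(find_user_unsafe(payload))  # [('admin',)] -- leaked row
print(find_user_safe(payload))    # [] -- no match
```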
Maturity Levels

| Level | Name | Description | Technologies | Example tools |
| --- | --- | --- | --- | --- |
| 0 | Non-existent | AI provides no security analysis. All vulnerability checks, threat modeling, and mitigations are manual or handled by non-AI tools. | | |
| 1 | One-off assist | AI can scan code for common vulnerabilities and report results on demand. These assessments are reactive and limited to specific requests or periodic checks, with results manually interpreted. | | |
| 2 | Integrated assist | AI continuously scans for known security risks and dependency vulnerabilities as part of CI/CD pipelines. It suggests remediations but does not autonomously implement them. | | |
| 3 | AI-human collaboration | AI prioritizes vulnerabilities, predicts potential attack vectors, and recommends mitigation strategies. Developers collaborate to verify findings, approve remediations, and refine AI-driven security improvements. | | |
| 4 | Full autonomy* | AI independently monitors systems, detects vulnerabilities, applies patches, and validates security measures. It dynamically adapts to new threats and maintains compliance without human involvement. | | |

\* Many vendors advertise their AI agents as fully autonomous at discovering and fixing security breaches, but the reported results are often showcases that do not translate into real-life situations.
Unit Test Generation
The creation of tests that verify the correctness of individual units of code. These tests help ensure that small parts of the software behave as expected and continue to do so over time, preventing regressions during future development or system changes.
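A minimal sketch of what generated unit tests for a small function might look like, covering a normal case, a messy input, and an edge case. The function under test (`slugify`) and its test cases are invented for illustration.

```python
import unittest

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        self.assertEqual(slugify("  Spaced   Out  "), "spaced-out")

    def test_empty_string(self):
        # Edge case: no words yields an empty slug.
        self.assertEqual(slugify(""), "")

# Run the suite with: python -m unittest <this_file>
```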
Maturity Levels

| Level | Name | Description | Technologies | Example tools |
| --- | --- | --- | --- | --- |
| 0 | Non-existent | AI offers no capability for generating or maintaining unit tests. All test creation and updates are manual. | | |
| 1 | One-off assist | AI can generate unit tests for individual functions or classes when requested. It creates simple assertions and edge cases without deep code context. | | |
| 2 | Integrated assist | AI can consistently produce unit tests as new code is written, expanding coverage beyond simple cases. It creates meaningful inputs and expected outputs but does not adapt tests as the code evolves, unless asked to. | | |
| 3 | AI-human collaboration | AI generates comprehensive test suites, identifies gaps in coverage, and suggests additional scenarios, including edge cases and integration points. Humans validate and refine tests based on business logic and critical paths. | | |
| 4 | Full autonomy | AI autonomously creates, updates, and maintains test suites throughout the software lifecycle. It ensures complete coverage, adapts tests as code changes, and validates results without human intervention. | | |
