Your Ready-to-Accelerate Assessment begins here

Select each area that applies to your business goals and answer key questions chosen by Red Hat experts, and get a custom report to help accelerate your efforts to modernize your organization. The estimated time to complete each section is 8-10 minutes.

Interests (select all that apply):
- Modernizing platforms: Ensure your organization is poised to provide the platform flexibility and scalability to manage demand.
- Modernizing applications: Understand how you can modernize your development and address the best choices for future deployment.
- Automating IT: Determine the level of effort required to expand and integrate automation in your organization.
- Transformation: Learn about safeguarding your organization's longevity through continuous innovation.

Modernizing platforms, Page 1/3: Environment

What is your department's or organization's main responsibility?
- Infrastructure delivery
- Software engineering
- Operations and software/infrastructure support

What is the size of your department?
- 1-20
- 21-50
- 51-100
- 100+

Which of the following practices/methodologies are implemented in your organization? (Select all that apply.)
- Waterfall
- Agile
- DevOps
- DevSecOps

How is software and infrastructure deployed in your organization?
- Manually
- With some basic automation (e.g., scripts)
- With automation (e.g., Ansible, Terraform, Puppet, Chef)
- With advanced automation using infrastructure-as-code principles (e.g., Ansible + CI/CD)

Modernizing platforms, Page 2/3: Development

On which of the following does your development process allow you to run workloads? (Select all that apply.)
- IaaS (Infrastructure-as-a-Service) / virtual machine (VM) only
- PaaS (Platform-as-a-Service) / container platform
- SaaS (Software-as-a-Service)
- FaaS (Function-as-a-Service)

Which option best describes your current testing process?
- Manual
- Automated builds / continuous integration
- Continuous deployment
- Test-driven development

Do you have influence on the development platforms your company purchases or uses?
- I have little to no influence.
- I have some influence.
- I am part of the decision-making team.

How quickly can you get code from development to production?
- Minutes
- Hours
- Days
- A week or longer

What is your typical application release cycle?
- Daily
- Weekly
- Monthly
- Quarterly
- Longer cycles

Where is the development platform you use most often for your company?
- Laptop
- VDI (virtual desktop infrastructure)
- Cloud-based integrated development environment (IDE)

Modernizing platforms, Page 3/3: Operations

How many clouds (private, public) does your organization currently use?
- None
- 1 to 2
- 3 or more

How are you managing your cloud resources? (Select all that apply.)
- Not applicable
- Using the cloud vendor's web UI
- Consuming the cloud vendor's API
- Using a centralized cloud management platform (CMP)
- Fully automated, so no user-to-cloud interaction is needed

What is your lead time for change? (For example, how long does it take to add capacity?)
- Hours
- Days
- Weeks
- Months

What is your strategy to migrate in-house workloads to the cloud?
- We have no strategy.
- We're re-platforming (moving pockets into the cloud, private or public).
- Our primary strategy is to run a private cloud.
- We intend to move as much as possible to the public cloud.
- We intend to operate in a hybrid cloud model.

Modernizing applications, Page 1/3: Environment

How are software projects organized and delivered?
- Projects follow a sequential process of analysis, design, implementation, testing, and release.
- Some iterative practices exist within specific teams, supporting delivery of larger programs.
- An iterative delivery approach with a product backlog, definition of done, and retrospectives, with a hand-off to operations that is largely independent of software development.
- Business, application development, and delivery have aligned agile practices; ownership of production sits within a cross-functional application team.
- An agile process with quantifiable metrics based on systems in production and fast feedback loops, designed to verify and quantify strategic goals and desired business outcomes.

What best describes the level of governance and control within your organization?
- No commonly implemented change policy; new changes can be introduced at any point in the software delivery life cycle.
- The change policy is sometimes bypassed for expediency, or for changes seen as critical to success.
- Standard processes and architectures have developed over time, with addendums to support some new technologies and capabilities, but lack of ownership has led to complications and a low understanding of the rationale in some cases.
- Common approaches support delivery and architecture across the software delivery life cycle, incorporating good levels of transparency and adaptability.
- Processes in place allow measurement of the business value of multiple software versions running simultaneously in production, coupled with team-level authority to make end-of-life decisions on specific versions.

What best describes the process of determining and implementing change to software systems?
- Future changes are driven by immediate issues, faults, and customer requirements, with no consideration of the market.
- Changes are aligned to a product roadmap driven by a common understanding of the potential market.
- Experimental changes are introduced, designed to test market response.
- An evolutionary product roadmap is defined around a researched view of market needs.
- Bold introductions of new features tackle established use cases in innovative new ways.

How do you manage work in progress?
- New work items are tackled somewhat reactively, with significant new work requiring project managers to negotiate and secure necessary resources and people.
- Individual teams operate their own prioritized product backlogs and task and issue tracking. Engineers are frequently assigned to production support issues, and multiple systems are used to track software and production issues.
- Application support roles are incorporated within development teams, working from a shared backlog and issue register. Planned development iterations frequently deliver fewer changes than expected, typically due to priority production issues taking focus.
- Product owners have tight control over backlogs and an effective prioritization and estimation mechanism that directly aligns with the business. All stakeholders accept that work in progress needs to be actively restricted to achieve quality, and features are prioritized toward meeting strategic goals.

What is your organization's approach to technical debt (i.e., the implied cost of additional rework caused by choosing an easy, limited solution now instead of a better approach that would take longer)?
- High technical debt, with a lack of urgency or very little progress toward reducing it.
- Technical debt is recognized and occasionally included in work plans.
- Technical debt is being actively managed.
- Technical debt is low, but it does block fully automated continuous delivery.
- Anything that interferes with agility is regarded as technical debt and actively removed as a matter of priority.
Do you report any key performance indicators on software development and delivery?
- Reports on software delivery performance are not habitually created; instead, they are generated through occasional auditing processes.
- Some common metrics covering aspects of the software delivery process and runtime software behavior are generally reported within teams, but not necessarily across teams as a matter of course.
- Common metrics are aggregated and used to report specific performance of application life-cycle stages.
- Software delivery performance is consistently and automatically measured across all application teams, with full visibility to all stakeholders.
- Software delivery performance measures are continuously tracked and factored into future planning, supporting continuous improvement.

Modernizing applications, Page 2/3: Development

Do multiple teams within your organization create and share development standards in a consistent, standardized manner?
- It's difficult to find information or necessary contacts within our organization.
- Knowledge sharing within teams is good, but there's low visibility into other areas, with a tendency to reinvent solutions to common problems.
- Although limited, there's a general willingness to cooperate across silos.
- There is an emergence of collaboration, meritocracy, transparency, and open exchange of ideas.
- Our organization actively cultivates and encourages open exchange with external parties.

Which best describes how your organization begins building new applications?
- No common approach; we typically develop new approaches across teams, or for new apps sometimes use existing work as a basis via a copy-and-paste method.
- There is a commonly shared understanding of software development stages, with some common components reused to support specifics. Manual intervention is needed to promote build artifacts across life-cycle stages.
- There are examples of some application teams introducing significant amounts of automation within their build and delivery pipelines, but these examples are not widespread and have low adoption.
- There is a centrally defined process, with supporting services and tools enabling high levels of build and delivery automation across development, testing, and production, that is commonly adopted and adhered to within the development community. There are few opportunities for teams to customize the process or introduce different technologies (for example, different programming languages) without breaking away from the standard mechanisms.
- There is a modular, extensible, reusable, and highly automated standard approach to software builds and delivery that allows for application-specific customization and targeted improvements.

Which best describes your application development architecture?
- Ad hoc choice of application platforms and tooling; limited understanding of contemporary architectural approaches.
- A selected vendor technology roadmap; initial understanding of new architectures and designs.
- Iterative development of existing applications, a limited legacy strategy, and the beginnings of new development architectures.
- Focus on new application platforms with limited legacy platforms; well-defined architecture for new development projects and operating models.
- A holistic, defined overall development strategy, with good designs and architectures in place and under regular review.

What level of maturity is your source code management?
- Changes are not controlled by a version control system.
- Version-controlled change management requires significant effort to merge for new release candidates; merges are difficult and require specialized, experienced team members to perform.
- Code, configuration, and associated build and packaging instructions are version controlled.
- Change merges are more straightforward, but may require high levels of coordination with external work streams that on occasion delay releases.
- Code and configuration merges for upcoming releases are compact and incremental, and changes can be performed by all appropriate team members without being blocked by other work streams.

How would you describe your organization's approach to software testing?
- All tests are performed manually.
- Some testing is automated; for example, partial unit test coverage.
- Automation of different kinds of testing is emerging; for example, unit, integration, regression, security, availability, and performance testing.
- Quality gateways and fast feedback loops exist. Defects in production are replicated via repeatable tests and added to the pipeline.

Modernizing applications, Page 3/3: Operations

How would you describe the way you work with your peers and other technology specialists?
- Tasks are generally worked on exclusively by individuals, and collaboration is low or limited to team members fulfilling very similar roles.
- Siloed functions are still evident, such as development, storage, networking, virtualization, security, and compliance.
- There are some examples of cross-functional collaboration within teams.
- Knowledge is habitually shared across technologies and specialties within teams.
- There are self-directed and empowered cross-functional teams.
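The most mature answer to the testing question above describes replicating production defects as repeatable tests that stay in the pipeline. As a minimal sketch of that practice (the function, the bug, and the test names are all hypothetical, not from any Red Hat product):

```python
# Sketch of the "defects in production are replicated via repeatable tests"
# practice: when a bug surfaces in production, first write a failing test
# that reproduces it, then fix the code so the test passes, and keep the
# test in the pipeline so the defect cannot silently return.

def parse_version(tag):
    """Parse a release tag like 'v1.2.3' into a (major, minor, patch) tuple."""
    # The hypothetical production defect: tags without the leading 'v'
    # used to crash here; the fix tolerates both forms.
    text = tag[1:] if tag.startswith("v") else tag
    return tuple(int(part) for part in text.split("."))

def test_parse_version_accepts_missing_v_prefix():
    # Regression test added after the incident: it replays the original
    # failing input and pins the expected behavior.
    assert parse_version("1.2.3") == (1, 2, 3)
    assert parse_version("v1.2.3") == (1, 2, 3)

test_parse_version_accepts_missing_v_prefix()
```

Run under a test runner such as pytest, a test like this acts as the "quality gateway" the answer refers to: the pipeline fails if the defect reappears.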
How is software prepared for distribution and deployment? (Select all that apply.)
- Distributable packages are created manually on users' personal devices.
- An automated build process runs from version-controlled changes, but may rely on special environmental configuration and libraries.
- An independent build process can be run anywhere with access to declared dependencies and resources.
- Packages are built once, and once only, with a commonly understood versioning schema and controlled access to third-party build artifacts and dependencies.
- Packages include consistent, trackable metadata that can be used to verify the configuration state of a particular runtime environment.

Which best describes your organization's approach to software deployment?
- Manual configuration and/or deployment of applications and services
- Some automation, but promotion between delivery stages (for example, dev to test, test to prod) is done manually
- Some examples of automated deployment processes, but created case by case with little reuse between applications
- Standardized tooling used to promote applications to production
- Software releases independent of deployment into production, allowing canary or blue/green releases

How do you monitor and track operational performance and potential issues of applications? (Select all that apply.)
- Ad hoc logging and manual monitoring of solution resources
- Standardized logging and capture of common operational metrics
- Reacting to issues triggered by predefined alerting thresholds
- Core capabilities of the solution able to adapt and recover from failure
- Proactive testing in production, including verifying potential failure states

Automating IT, Page 1/3: Environment

When building new IT processes, is automation considered before, during, or after the build?
- Before
- During
- After

How are technology decisions made in your organization?
- Technology decisions are always made solely within a department.
- Technology decisions are sometimes made within a department and sometimes across departments.
- Technology decisions are made solely across departments.

What best describes your current approach to automating tasks?
- Processes are changed in silos.
- A DevOps team changes processes.
- Cross-organizational teams change processes.
- Multiple teams automate change processes for their components and contribute to a central repository.

Which of these best describes your security practices?
- No automation is used; security requirements are applied manually.
- Scheduling the change window and taking things offline are handled manually; scripts are deployed to secure systems.
- Some automation is used to secure some things (e.g., OS hardening/patching).
- Validation is done with automation; security processes are defined in a workflow.
- All security items are applied using automation, complete with an end-to-end workflow of security functions, including validation.

Do multiple teams within your organization create and share automation content in a consistent, standardized manner?
- No
- Yes

How much automation has your organization deployed?
- No automation is used.
- Teams of people write ad hoc scripts for automation.
- We've standardized on an automation platform, and we're beginning to automate processes.
- We are able to automate all of our Day 2 tasks.
- All applications are deployed, and monitoring alerts and other day-to-day IT tasks are automated.

Automating IT, Page 2/3: Development

Overall, how would you describe the level of IT automation for application development and deployment in your environment?
- Level 1: Many manual steps throughout the software development life cycle (SDLC), including testing, patching, configuration, release, and deployment
- Level 2: Overall processes manually coordinated, but with the emergence of scripting to simplify repeated steps
- Level 3: Automation coordinating the processes, with some automated testing and the emergence of declarative configuration implementation
- Level 4: Significant reuse of delivery automation and automated testing, now including operational concerns, but release frequency somewhat impeded by the complexity of application code and configuration
- Level 5: Software deployed to production before a managed release, with production issues resolved through a test-driven approach; self-service infrastructure with little or no manual intervention required

How automated is your application deployment when rolling out new applications?
- Manual deployment; no process or automation
- Minimal deployment with ad hoc scripting; not repeatable
- Baseline continuous integration (CI) processing (unit tests, manual testing)
- Advanced CI: greater than 90% automated testing, pipelines, approval gateways
- Full continuous integration / continuous delivery (CI/CD) from development into production (greater than 90%)

How automated is your application deployment when rolling out legacy application updates?
- Manual deployment; no process or automation
- Minimal deployment with ad hoc scripting; not repeatable
- Baseline continuous integration (CI) processing (unit tests, manual testing)
- Advanced CI: greater than 90% automated testing, pipelines, approval gateways
- Full continuous integration / continuous delivery (CI/CD) from development into production (greater than 90%)

Automating IT, Page 3/3: Operations

Which best describes your current automation operations process?
- Core build for the operating system (OS), with only basic (manual) provisioning
- Patch and release management (OS)
- Automated quality assurance (QA) staging process (standard operating environment, SOE)
- Automated OS builds
- Automatically managed and provisioned infrastructure through self-service

Which best describes your current process for creating and deploying IT automation content (scripts, code, configuration files, etc.)?
- Manual deployment; no process or automation, or unknown
- Minimal deployment with ad hoc scripting; not repeatable
- Baseline continuous integration (CI) processing (unit tests, manual testing)
- Advanced CI: greater than 90% automated testing, pipelines, approval gateways
- Full continuous integration / continuous delivery (CI/CD) from development into production (greater than 90%)

Which best describes how automation is used to remediate your technology outages?
- All of the remediation is done manually, or unknown.
- Automation is used for alerts.
- Automation is used for alerts and can fetch logs pertaining to the issue.
- Automation is used for alerts, can fetch logs pertaining to the issue, and can offer remediating steps.
- Automation is used for alerts, can fetch logs pertaining to the issue, can offer remediating steps, and can auto-remediate the issue.

Transformation, Page 1/3: Environment

Which best describes your leadership's current state of transformation?
- Autocratic leadership: Top-down decisions, strong silos, information hiding, blameful culture
- Getting started: Understanding constraints, systems thinking, inviting inquiry, encouraging sharing
- Early open leadership: Sponsored feedback, encouraging direct action, bridging silos, aligned goals/metrics
- Intermediate open leadership: Visibility, shortened feedback, decentralized decisions, celebrating learning
- Advanced open leadership: Empowered individuals, unified purpose, shared risk, psychological safety

Which best describes your current product management process?
- Project management: Waterfall; all requirements defined at the start
- Getting started: Ideas are defined as epics, features are incomplete, prioritization is an afterthought, and customer or developer collaboration is rare
- Early product: Hypothesis statements with measurable outcomes; collaboration with business and DevOps experts; features have a definition of ready and done agreed with the team
- Intermediate product: Measurable outcomes include an MVP; features are viable in the marketplace, feasible to build, and usable by customers; prioritization is in place
- Advanced product: Outcomes validate the original hypothesis and inform pivot-or-persevere decisions; cost of delay is estimated; the program backlog is a collection of minimum marketable features

How do you feel about this statement: "I understand the vision and strategy of my company and my role in supporting its success"?
- Not aware of any strategy
- Have heard of a strategy; unsure of my involvement
- Aware of a strategy; have an idea of how I can be involved
- Aware of a strategy; understand how my role impacts its success
- Contributed to the strategy

Transformation, Page 2/3: Development

Which statement best describes your organization's DevOps methodology maturity level?
- Traditional development: Waterfall, siloed QA team, manual testing, separate workstream for bug fixes
- Getting started: Continuous integration, basic unit tests, distributed version control
- Early development: Short-lived feature branches, integration environments, feature teams resolve bugs, consistent developer environments, code review process
- Intermediate development: Continuous delivery, trunk-based development, developers on call, development environments match production, pair programming
- Advanced development: Feature flags, complete parity between monitoring and testing, blue-green deployments supported

Which statement best describes your DevOps development environment?
- Traditional programming techniques in a heavily segmented structure
- Sporadic agile adoption with limited cross-team collaboration
- Multi-team collaboration through formalized communication channels
- 100% DevOps collaborative culture with energized cross-functional teams and constant improvement

Which statement best describes the way your organization creates and shares information or ideas?
- It is difficult to create or share ideas, or to find knowledgeable contacts in the organization.
- Creating and sharing between teams exists, but with low visibility into other areas and reinvention of solutions to common problems.
- Creating and sharing is limited, with a willingness to begin collaborating across teams.
- Cross-team creation and sharing is emerging, with a focus on open exchange, meritocracy, and transparency.
- Cross-team creation and sharing is actively encouraged and cultivated, with a focus on open exchange.

Transformation, Page 3/3: Operations

Which statement best describes your current architecture maturity?
- Legacy architecture: Waterfall, large releases, monolithic, tightly coupled, focused on compliance
- Getting started: Strangler pattern, on-demand infrastructure, automation to reduce manual and repetitive tasks
- Early architecture: Enables DevOps practices, hybrid monolithic/microservices architecture, governance addressed early
- Intermediate architecture: Release on demand, few dependencies between components, versioned APIs
- Design patterns of enterprise application architecture: Microservices-based, loosely coupled, continuous delivery, enables experimentation, enables governance

Which statement best describes your incident response methodology?
- No standard incident response process: No on-call rotations; incidents are worked by whoever discovers them; incidents are not tracked; no postmortem/after-incident review process
- Getting started: Alerting on monitoring systems exists, but there is no process for managing the work of restoring service
- Traditional incident response: Incident alerts are sent to a central on-call person or a network operations center (NOC), which attempts service restoration according to playbooks; Incident Manager may be a formal position for large-scale incidents; feature teams are not involved in on-call/incident response; incident reviews focus on stakeholder explanation
- Intermediate incident response: The response process uses the Incident Command System or something similar; developers are part of the on-call rotation and incident response for their features/services
- Advanced incident response: The response process is practiced regularly; Incident Commanders exist across multiple teams/functions; the focus is on learning from incidents and a blameless approach; a post-incident review is done for every incident, regardless of size

Which statement best describes your current operations environment?
- Traditional ops: Manual tasks are not measured; engagement is via ticketing systems; service-level objectives (SLOs) are not defined
- Getting started: Some initial SLOs defined, blameless postmortems introduced, incident response process exists
- Early SRE teams: Release process is documented and automated, including rollbacks and canaries; a site reliability engineering (SRE) team charter has been defined; planning and execution are done jointly by developers and SRE
- Intermediate SRE teams: Periodic review of SRE work, along with service-level indicators (SLIs) and SLOs, with business leaders; escalation policy tied to SLO violations (error budgets, etc.); manual tasks are measured
- Advanced: A goal for the amount of manual tasks is set and achieved; service alerts are based on SLOs; the SRE team can identify major positive impact on the business

Get your results

Find out how advanced your organization is in its innovation. Complete the form on the assessment page to access your Ready-to-Accelerate results.
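The operations-environment question above ties escalation to SLO violations via error budgets. The arithmetic behind that idea can be sketched briefly; the 99.9% target and 30-day window below are illustrative assumptions, not values from the assessment:

```python
# Error-budget arithmetic behind the SLO-based answers: an availability
# SLO implies a fixed budget of allowed downtime per period, and alerting
# or escalation can be tied to how much of that budget has been spent.
# The 99.9% target and 30-day window are illustrative assumptions.

def error_budget_minutes(slo, period_days=30):
    """Allowed downtime (in minutes) over the period for a given availability SLO."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1.0 - slo)

def budget_remaining(slo, downtime_minutes, period_days=30):
    """Fraction of the error budget still unspent (negative means the SLO is violated)."""
    budget = error_budget_minutes(slo, period_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime.
print(round(error_budget_minutes(0.999), 1))
```

An escalation policy "tied to SLO violations", as the intermediate-SRE answer puts it, might page when `budget_remaining` drops below some threshold rather than on every individual incident.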