The Agile + DevOps EAST 2018

Agile + DevOps is one of the software conferences held by TechWell Corporation. The conference focuses on accelerating the delivery of reliable and secure software applications, and on how Agile and DevOps practices bring cross-functional stakeholders together to deliver software with greater speed and agility while meeting quality and security demands. It gives you an opportunity to learn from industry experts how your organization can leverage Agile and DevOps concepts to improve deployment frequency and time to market, reduce lead time, and deliver stable new features with a higher likelihood of success.

Last November, I had a chance to attend the Agile + DevOps EAST 2018 held in Orlando, Florida, U.S.A.

The topics presented at the conference were:

  • Agile and DevOps Leadership
  • Agile Engineering Practices
  • Agile Testing and Automation
  • Building Agile and DevOps Cultures
  • Continuous Integration
  • Continuous Delivery/Deployment
  • DevSecOps
  • Scaling Agile and DevOps Capabilities
  • Digital Transformation
  • Agile and DevOps Certification Training

The conference came with networking sessions as well. The ones I found interesting were the morning run, lean coffee, and morning yoga. See the list of all the networking events here.

Of the many wonderful sessions, I’d like to share a few that interested me most:

How AI Is Transforming Software Testing

This session was presented by Raj Subramanian, a Developer Evangelist from Testim.io. Currently, Artificial Intelligence (AI) and machine learning (ML) are advancing at a rapid pace. Many well-known companies, such as Apple, Google and Facebook, have already invested in AI technology to solve problems in areas like healthcare, autonomous cars, search engines, predictive modeling and so on. So what about software testing? Will it adapt to these changes and embrace AI?

Here are the stories he uncovered:

Evolution of software testing 

Raj summarized how the testing practice has evolved over the last 4 decades:

  1. In the 1980s, many software companies used the waterfall methodology and manual testing to deliver products.
  2. In the 1990s, software testing had automation tools, but they were unstable, limited in functionality, and expensive.
  3. Between 2003 and 2010, open source frameworks became popular, and the testing community started sharing and improving test tooling together. At the same time, people wanted more software features delivered faster, so Agile became a big thing in the SDLC.
  4. In the 2010s, people looked for ways to speed up writing tests and finding bugs. Crowdtesting encouraged more people to give feedback on an application, and cloud testing emerged as people realized they needed more server space and faster processing. They also ran into the problem of maintenance: how expensive it is to buy and maintain the hardware and software that keep your tests running. Now we have DevOps, continuous testing, and CI/CD integration.
  5. And the future of software testing? Raj said he believes it will be about autonomous testing using machine learning and AI.

You can find more information from Raj’s blog post, How AI is Changing the Future of Software Testing.

What challenges do we face in QA?

Developing and maintaining automated testing isn’t easy, and some of us may not have resources such as time or trained testers for the task. According to Raj, the challenges we face in QA fall into five categories:

Skill set
  • Skilled testers are expensive.
Authoring
  • In UI testing, testers need to think about how to handle element waits, build reusable components, and parameterize their tests (see the sketch after this list).
Initialization state
  • The testing environment should be reset after testing finishes.
  • Common scenarios are login services and shopping carts.
Maintenance
  • Test cases often fail because the application changed. A recent survey states this can take up to 30% of a tester’s work, and it’s a big obstacle to adding more test cases.
  • Raj’s team started thinking about how to reduce the cost of maintaining test cases and increase their stability.
Scalability
  • Raj’s team has 500 tests that take 5 hours to run. If those tests fail or find a bug, it’s hard to release more than once a day.
  • To release faster, tests need to be written faster and failures found more quickly.
  • Think about whether the automation tests/framework are extensible, whether more server space is needed, and whether tests can run in parallel and more often.
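
To make the authoring challenges above more concrete, here is a minimal sketch of a parameterized UI test with a reusable login helper and an explicit wait, written with pytest and Selenium. It is my own illustration, not from the session; the URL, element IDs, and credentials are hypothetical.

    # A minimal sketch (not from the session) of a parameterized UI test.
    # The URL, element IDs, and credentials below are hypothetical.
    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def login(driver, username, password):
        """Reusable component: log in and wait for the dashboard to appear."""
        driver.get("https://example.test/login")
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "submit").click()
        # Explicit wait instead of a fixed sleep, to handle element waits.
        WebDriverWait(driver, 10).until(
            EC.visibility_of_element_located((By.ID, "dashboard"))
        )

    @pytest.mark.parametrize("username,password", [
        ("alice", "correct-password"),
        ("bob", "correct-password"),
    ])
    def test_login_shows_dashboard(username, password):
        driver = webdriver.Chrome()
        try:
            login(driver, username, password)
            assert driver.find_element(By.ID, "dashboard").is_displayed()
        finally:
            driver.quit()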

How can we use AI to solve testing problems?

Testim uses dynamic locators: its Artificial Intelligence (AI) platform analyzes all the DOM objects of a page and extracts each object and its properties in real time. Based on this analysis, the AI then decides the best strategy for locating a particular element. Even if a developer changes an attribute of an element, the test continues to run, which increases the stability of the tests. As a result, authoring is easier and the automated tests run faster and more reliably.
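
Testim’s exact algorithm wasn’t covered in detail, but conceptually a dynamic locator can be pictured as scoring each candidate element against many recorded attributes instead of depending on a single selector. A rough, hypothetical sketch of that idea:

    # Hypothetical sketch of the idea behind dynamic locators: score each
    # candidate element against the attributes recorded at authoring time,
    # so one changed attribute does not break the locator.
    def score(candidate, recorded):
        matches = sum(1 for k, v in recorded.items() if candidate.get(k) == v)
        return matches / len(recorded)

    def locate(dom_elements, recorded, threshold=0.6):
        best = max(dom_elements, key=lambda el: score(el, recorded))
        return best if score(best, recorded) >= threshold else None

    recorded = {"tag": "button", "id": "buy", "text": "Buy now", "class": "btn primary"}
    page = [
        {"tag": "button", "id": "buy-now", "text": "Buy now", "class": "btn primary"},  # id changed
        {"tag": "a", "id": "help", "text": "Help", "class": "link"},
    ]
    print(locate(page, recorded))  # still finds the first element (3 of 4 attributes match)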

Shift Left: Continuous Performance Testing in the CI/CD Pipeline

Gajan Pathmanathan, who leads Enterprise Quality Architecture at Quicken Loans, presented this session. Let me share parts of what he covered:

What is continuous performance testing?

According to Wikipedia, performance testing is a process of evaluating the quality or capability of a product. In general, software engineers want to investigate, measure, validate, or verify quality attributes of a system, such as scalability, reliability, and resource usage. Continuous performance testing is the practice of executing automated performance tests as part of delivery pipelines (CI/CD). There are two types of continuous performance testing: performance smoke/regression testing and end-to-end performance testing.
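
As a simple illustration of the first type (my own example, not from the session), a performance smoke test in a pipeline can be as small as asserting a response-time budget on a key endpoint; the endpoint and the 500 ms budget below are hypothetical:

    # Hypothetical performance smoke test that a pipeline stage could run.
    # The endpoint and the 500 ms budget are made-up examples.
    import requests

    def test_homepage_response_time():
        response = requests.get("https://staging.example.test/")
        assert response.status_code == 200
        assert response.elapsed.total_seconds() < 0.5, "response-time budget exceeded"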

Boehm’s curve

What does Boehm’s curve tell us? The principal message behind Boehm’s curve is that bugs always become more expensive to fix later in the process. A diagram on Reqtest’s blog illustrates that the same bugs would be much cheaper to fix earlier on the curve. In essence, Boehm’s curve tells us to fix as many bugs as possible during the requirement analysis and testing phases of a project, because after deployment the cost of fixing them grows massively.

How much could software errors be costing your company?

According to the report on software errors by tricentis.com, software failures cost the economy US$1.7 trillion in financial losses in 2017. In total, software failures at 314 companies affected 3.6 billion people and caused more than 268 years of downtime.

To learn more data and figures, read How much could software errors be costing your company? 

What is the process of continuous performance testing?

Gajan explained this with an example from his company, Quicken Loans Technology. The images provided below are not from the session slides, but similar ones provided by TechWell can be found here. The traffic lights in the slides represent the quality verification at each pipeline stage.

  1. When a developer creates a PR in a Git repository, it automatically triggers a unit test run to check the code’s stability.
  2. Peer code review is conducted, which is an important step for verifying code quality. Other activities include checking legacy code, coding logic, style, and so on.
  3. After the code review is approved, the branch is merged into the master/main branch.

The performance smoke/regression tests are then triggered in the pipeline before the artifact is deployed to the testing environment.

  4. After the artifact is deployed to the testing environment, the entire automation suite, including integration, UI, regression, security, and performance smoke/regression tests, is executed to ensure the quality of the artifact.
  5. When automated testing is finished, QA runs exploratory tests manually, and then the artifact is deployed to the beta environment.

  6. In the beta environment, the entire automation suite is executed again, especially the end-to-end performance tests and end-to-end functional tests.
  7. If all the traffic lights are green, the artifact is deployed to the production environment.

After the artifact is fully deployed, continuous performance monitoring tracks its status in the production environment.
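
To summarize the flow, here is a rough sketch of the stages as an ordered list of quality gates, where each “traffic light” must be green before the next stage runs. The stage names follow the session; the code itself is only illustrative and not the actual Quicken Loans pipeline.

    # Rough, illustrative sketch of the pipeline as ordered quality gates.
    # Each stage returns True (green light) or False (red light); the
    # implementations here are placeholders.
    def run_stage(name):
        print(f"running: {name}")
        return True  # a real stage would run tests, scans, or reviews

    STAGES = [
        "unit tests (on pull request)",
        "peer code review",
        "performance smoke/regression tests",
        "integration, UI, regression, security tests (test environment)",
        "exploratory tests",
        "end-to-end performance and functional tests (beta environment)",
    ]

    def pipeline():
        for stage in STAGES:
            if not run_stage(stage):
                print(f"red light at '{stage}', stopping before production")
                return False
        print("all lights green, deploying to production")
        return True

    pipeline()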

Integrating Infrastructure as Code into a Continuous Delivery Pipeline

This session was presented by Adarsh Shah from Contino. He started by introducing Infrastructure as Code (IaC).

What is Infrastructure as Code (IaC)?

Infrastructure as Code (IaC) is an approach that takes proven coding techniques used for software systems and extends them to infrastructure. Without IaC, we face challenges with configuration issues, repeatability, human error, and the time it takes to complete changes.
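
As a conceptual illustration (not any specific tool’s API), IaC means describing infrastructure declaratively in version-controlled code and applying it idempotently, so the same definition always produces the same environment:

    # Conceptual sketch of Infrastructure as Code: a declarative,
    # version-controlled description of the desired infrastructure plus an
    # idempotent "apply" step. Illustrative only, not a real tool's API.
    DESIRED = {
        "web-server": {"size": "small", "count": 2},
        "database": {"size": "medium", "count": 1},
    }

    def apply(desired, current):
        """Converge the current state toward the desired state."""
        for name, spec in desired.items():
            if current.get(name) != spec:
                print(f"provisioning {name}: {spec}")
                current[name] = spec
        for name in list(current):
            if name not in desired:
                print(f"removing {name}")
                del current[name]
        return current

    state = {}
    apply(DESIRED, state)   # first run provisions everything
    apply(DESIRED, state)   # second run changes nothing (repeatable, no drift)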

He went on to describe the concept of continuous delivery.

What is continuous delivery?

Continuous Delivery (CD) is the ability to get changes of all types — including new features, configuration changes, bug fixes and experiments — into production, or into the hands of users, safely and quickly in a sustainable way.

Considerations and best practices

He shared things to consider and some best practices:

Source control

Everything goes into source control; when infrastructure code is version controlled, making changes is easier and collaboration among team members improves. Because the code changes frequently, Adarsh suggested using the code and its tests as documentation.

Infra as Code testing

At the lower levels of the test pyramid, feedback arrives faster than at the higher levels. At a higher level, such as a smoke test with a dummy app, we gain more confidence, but the maintenance cost is also higher.

Security patterns

For security patterns, Adarsh suggested automating the CIS benchmarks and building hardening policies for static scanning. They also use secrets management and artifact signing/verification to keep the application secure.

If your company is in a field such as finance or healthcare, there are compliance standards such as SOX, PII, HIPAA and PCI. A better choice than paperwork is using Compliance as Code tools such as Chef InSpec and HashiCorp Sentinel (Policy as Code).

Patterns for provisioning

He shared some patterns for provisioning company infrastructure, including immutable VMs, containerized services, and base images with app pull.

Immutable VM: App image consumed by an infrastructure module.

Containerized services: If you use a Docker container, you should test the app, run security scans, and sign the image to ensure its quality before it is deployed to Kubernetes.

Base image & app pull: This is the traditional approach used when you don’t have immutable VMs or containerized services in your infrastructure. The anti-pattern here is allowing long-lived servers.

People and process: We need to enable teams to interact; infrastructure, security, compliance, QA, and all other teams should work together to improve the CI/CD process. In such a tightly connected organization, feedback arrives much faster.

Service Virtualization: How to Test More by Testing Less

Paul Merill from Beaufort Fairmont Automated Testing Services spoke about service virtualization in this session. Here is some of what he shared:

Why are your tests so flaky?

In most cases, tests are flaky due to a “lack of adequate test data management”. Test data management is the process of planning, designing, storing, and managing the data an application needs for testing. Its main objective is to support checking and testing the quality of the software: it separates test data from production data, optimizes the size of test data sets, and produces testing reports. If we lack test data management or have poor data quality, our test cases will be unstable and the risk of bugs reaching production increases.
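
One small, hypothetical example of what better test data management can look like: each test gets its own isolated, known data and cleans it up afterwards, instead of sharing mutable data across tests.

    # Hypothetical example of managed test data: each test gets its own
    # known records and cleans them up, instead of sharing mutable data.
    import pytest
    import sqlite3

    @pytest.fixture
    def orders_db():
        conn = sqlite3.connect(":memory:")  # isolated from production data
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
        conn.executemany("INSERT INTO orders (status) VALUES (?)",
                         [("open",), ("shipped",)])
        yield conn
        conn.close()  # teardown: nothing leaks into the next test

    def test_open_orders_count(orders_db):
        count = orders_db.execute(
            "SELECT COUNT(*) FROM orders WHERE status = 'open'").fetchone()[0]
        assert count == 1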

What is service virtualization?

Service virtualization is the simulation of the behavior of software components that are unavailable or restricted during the testing stage of the software development lifecycle. Component simulators, also called virtual assets, will reflect the real software components’ behavior as closely as the tests require.

Paul also used service virtualization in other situations: for example, when a service wasn’t available or hadn’t been developed yet, but the team still needed a component’s responses to find bugs early. Another case was unstable external services causing test cases to fail frequently; putting service virtualization in front of external services enables automated testing, or even expands its scope. It’s also helpful in demonstrations, when we need to trigger multiple behaviors from a component.

See 4 ways to Boost Your Test Process with Service Virtualization for more information.
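
As a tiny illustration (not the tooling Paul used), a virtual asset can be as simple as an in-process HTTP stub that returns the canned responses a dependency would produce:

    # Tiny, illustrative "virtual asset": an HTTP stub that simulates an
    # unavailable or unstable dependency with canned responses. This sketch
    # uses only the standard library, not the tooling from the talk.
    import json
    import threading
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CANNED = {"/rates": {"USD": 1.0, "EUR": 0.92}}  # made-up payload

    class VirtualService(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(CANNED.get(self.path, {"error": "unknown"})).encode()
            self.send_response(200 if self.path in CANNED else 404)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    server = HTTPServer(("localhost", 8099), VirtualService)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    # Tests can now point the system under test at http://localhost:8099/rates
    # and exercise scenarios even when the real service is unavailable.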

When do we need service virtualization, and what are its benefits?

We face situations like these at least once in our work:

  • A service isn’t available because developers aren’t done yet, but we still need it for testing.
  • External services or third-party libraries are unusable.
  • External services aren’t ready to connect to for a demonstration.
  • We need to simulate service behaviors to create more testing scenarios.

The benefits of service virtualization are:

  • Service virtualization removes constraints in regards to dependency availability in test environments.
  • Data-driven virtual assets allow test teams to easily manage their test data and increase their test coverage.
  • Service virtualization enables automated testing, or expands the scope of automated testing.
  • Testers are not the only ones who may benefit from using service virtualization.

Summary

This Agile + DevOps conference covered multiple topics across Agile processes, DevOps, security, and automated testing. Many experts and speakers came together to share their project experiences and knowledge in the sessions. Conferences like this give you a chance to meet people from different backgrounds, including software engineers, scrum masters, and managers. It was great to talk about how to improve product quality, deliver products faster through CI/CD pipelines, and seek out the best solutions.

In this article, I have shared some of the technical sessions that impressed me the most. For example, in the Integrating Infrastructure as Code into a Continuous Delivery Pipeline session, Adarsh focused on automating everything, including security checks, compliance, and source control. If we apply this to our CI/CD pipeline, we can ‘shift left’ our testing and find and fix bugs before a build is deployed to the production environment. In the Service Virtualization: How to Test More by Testing Less session, Paul introduced the concept of service virtualization and the benefits a system gains by implementing it: simulating component behavior removes constraints and dependencies in the test environment, so the scope of automated testing can be extended to increase test coverage.

Attending this event was a great experience for me. I learned a lot about how to ‘shift left’ testing in the software development cycle, how to expand automation coverage, what AI testing might look like in the future, and much more. I hope you have a chance to attend one in the future.
