Linaro Test and
Validation Summit
Linaro Engineering Teams
LCE13 - Dublin, July 2013
How Do We Better Test Our Engineering?
2nd Half
PART 1: Linaro Platform
Overview
● LAVA: Citius, Altius, Fortius
("Faster, Higher, Stronger" & easier to use)
● Builds & CI: Build Your Code When You
Are Not Looking
● QA Services: Cover all bases
PART 2: Linaro Engineering
● Kernel Developers & Maintainers
● Landing Teams
● Linaro Groups (LEG/LNG)
* What/how do they develop? (Do they use
CI? Manual or automated testing?)
* How do they validate/verify their output?
Linaro Test and
Validation Summit
Mike Holmes
LCE13 - Dublin, July 2013
LNG Engineering
● Create:
Two Linux kernels (with and without RT) and
a Yocto filesystem.
● Benchmark:
RT + KVM + HugePage + Dataplane APIs.
Required to test kernel and userspace
performance; some tests may be run in both
spaces.
● Platforms:
Arndale, AM335x Starter Kit? (LSI & TI
boards in future?) QEMU - Versatile
Express?
LNG outputs
● Our code is validated using CI, and
performance trends are monitored.
● Our output is verified on one
general-purpose ARM platform and against
two SoC vendor platforms, via a configurable
switch that allows dedicated links between
nodes under test.
● Using open source software, we use one
realistic network application, a
general-purpose benchmark and five
feature-specific test suites.
LNG outputs are verified by
Automated testing is done using
● custom scripts run via Jenkins & LAVA, executing
○ (RT) LTP (Real Time Test Tree)
○ (RT) Cyclictest
○ (RT) Hackbench
○ (KVM) virt-test
○ (Hugepage) sysbench OLTP
○ (KVM, Hugepage, RT) openvswitch (Kernel and
userspace)
○ (KVM, Hugepage, RT) netperf
○ Traffic test cases via pcap files and tcpreplay
(see replay sketch below)
LNG uses these tools
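A minimal sketch of one of the traffic test cases above, replaying a pcap file at a node under test while recording what comes back; the interface name, pcap file and rates are hypothetical, tcpreplay and tcpdump are the stock tools:

    #!/bin/sh
    IFACE=eth1              # hypothetical: link wired to the node under test
    PCAP=dataplane.pcap     # hypothetical capture file

    tcpdump -i "$IFACE" -w received.pcap &    # record responses for offline checks
    CAPTURE_PID=$!

    tcpreplay --intf1="$IFACE" --loop=10 --mbps=100 "$PCAP"   # replay at a fixed rate

    kill "$CAPTURE_PID"
    # a real test would now inspect received.pcap and report pass/fail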
● We test against three branches
○ linux-lng-tip (development)
○ linux-lng-lsk (bug fixes to stable)
○ linux-lng-lsk-RT (bug fixes to stable RT variant)
● LNG-specific CFG fragments (see merge
sketch below)
○ KVM (or will this be in an lsk kernel per
default?)
○ PREEMPT_RT
○ NO_HZ_FULL (or will this be in an lsk
kernel per default?)
○ HUGEPAGE (is that a CFG option?)
LNG Kernel branches / configuration
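A minimal sketch of how such CFG fragments could be merged on top of a platform defconfig with the kernel's own scripts/kconfig/merge_config.sh; the fragment paths are hypothetical, the script and CONFIG_ symbols are the standard ones:

    # hypothetical fragment lng/rt.config:    CONFIG_PREEMPT_RT_FULL=y
    # hypothetical fragment lng/nohz.config:  CONFIG_NO_HZ_FULL=y

    make ARCH=arm vexpress_defconfig                  # base configuration
    ./scripts/kconfig/merge_config.sh -m .config \
        lng/rt.config lng/nohz.config                 # overlay the fragments
    make ARCH=arm olddefconfig                        # resolve new dependencies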
● Some of the SoC vendors' hardware has up to
16 x 10Gb links; generating this much traffic
is non-trivial.
● Test equipment such as IXIA traffic
generators is expensive.
● Test equipment needs to be remotely switched
between the different hardware under test in
an automated way.
● Scheduling test runs that take days requires
specific equipment to be dedicated to the
task.
LNG unique challenges
● Multiple nodes may be needed to test traffic
interoperability.
● It is not feasible to replicate the test environment at
every developer's desk.
● The applied RT patch, even when disabled,
alters the execution paths.
● Some tests run for 24 hours or more.
LNG unique challenges
Questions
○ LAVA is(isn't) working for us
■ Interactive shells in the LAVA environment would
speed debugging given that testing can only be
performed with the test equipment in the lab
■ Multinode testing, with the reservation and
configuration of network switches is required.
■ Long-term trends in performance data need to
be analysed and compared for regression
analysis, triggering alerts for deviations.
○ Further thoughts on Friday
○ https://lce-13.zerista.com/event/member/79674
LNG Q&A
Linaro Test and
Validation Summit
Scott Bambrough
LCE13 - Dublin, July 2013
Landing Team Engineering
● Bootloaders
● Linux kernels based on mainline or current
RCs
● Linux kernels based on LSK (expected)
● Ubuntu member builds
● Android member builds
● ALIP member build
Some outputs are public, others confidential.
LT Outputs
● Kernel code is validated using CI in the
Linaro LAVA Lab, on various member
hardware devices and ARM fast models.
● Our kernel code is also validated in member
LAVA labs on both current and next gen
hardware.
● Our builds at present are sanity tested by
the LTs, but most testing is done by
piggybacking on QA or automated testing
set up by the platform team.
Verification of LT outputs
● Currently run only basic compile/boot test + default CI
tests (LTP, powermgmt)
● This needs to change; we want/need to do more
● We need more SoC-level tests; having LTs aware of
how to produce tests to run in LAVA will become more
important
LT and kernel tests
1. Much better LAVA documentation
2. Document the tests themselves
3. Infrastructure for testing
4. Infrastructure for better analysis of results
LT & Member Services Needs
● Deployment Guide
○ what are the hardware requirements for a lab
○ what are the infrastructure requirements for a lab
○ hardware setup, software installation instructions
● Administrator's Guide
○ basically how Dave Piggot does his job
○ after initial setup, day to day ops and maintenance
Better Documentation
● Test Developer's Guide
○ how to integrate tests to be run in lava-test-shell
(lava glue)
○ recommendations on how best to write tests for
lava-test-shell (see sketch below)
● User's Guide for lava-test-shell
○ for developers to use lava-test-shell
○ section devoted to using lava-test-shell in the
workflow of a kernel developer?
Better Documentation
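One candidate for such a Test Developer's Guide: a minimal lava-test-shell test definition. The test name, description and commands are hypothetical; the metadata/run/parse layout and the lava-test-case helper follow the lava-test-shell format:

    metadata:
        name: lt-sanity-example
        format: "Lava-Test-Shell Test Definition 1.0"
        description: "hypothetical SoC-level sanity check"
    run:
        steps:
            - "lava-test-case check-governor --shell cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor"
            - "echo boot-complete: pass"
    parse:
        pattern: "(?P<test_case_id>[\\w-]+):\\s+(?P<result>(pass|fail))"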
● Impossible to answer the question: What tests are
available in LAVA?
● http://lava-test.readthedocs.org/en/latest/index.html
○ not sufficient, not up to date
○ the problem isn't the LAVA team; Linaro needs an
acceptance policy on what a test must provide
before being used in LAVA
● would like to see metadata in test documentation
that can be used in test reports
○ in a format that can be used in report generation
Document the tests
● Buddy systems
○ TI LT developed tests that require access to
reference material for comparison
■ video frame captures
■ audio files
○ TI LT audio/video tests required an external box
to capture HDMI/audio output
○ Need to do more of this type of automated testing to
verify that lower level functions work correctly at
BSP level
○ GStreamer insanity test suite requires access to
multimedia content
Infrastructure for Testing
● Web dashboard won't cut it
● need to separate analysis from display
○ rather do an analysis, then decide how to display
● why infrastructure?
○ think there should be a level of reuse for
components used to do analysis
○ think these should be separate from LAVA
○ think of this as more of a data mining operation
Infrastructure for Analysis
example:
● generate test report as PDF
○ perform tests, generate a report
○ include metadata regarding tests
■ metadata from test documentation?
example:
● test report comparing:
○ current member BSP kernel
○ current LT kernel based on mainline
● evidence of quality/stability of LT/mainline kernel
● could be used to convince product teams
Infrastructure for Analysis
example:
● regression analysis of kernel changes
○ perform tests one day, make changes, test the
next day (see sketch below)
○ did any test results change?
■ yes, send report of changes via email
Infrastructure for Analysis
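A minimal sketch of that day-over-day comparison, assuming hypothetical result files with one "test_id result" pair per line; diff and mail are standard tools, everything else is illustrative:

    #!/bin/sh
    OLD=results-$(date -d yesterday +%F).txt   # hypothetical result snapshots
    NEW=results-$(date +%F).txt

    # diff exits non-zero when the files differ
    if ! diff -u "$OLD" "$NEW" > changes.diff; then
        mail -s "kernel test results changed" kernel-team@example.org < changes.diff
    fi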
Linaro Test and
Validation Summit
Kevin Hilman
LCE13 - Dublin, July 2013
Kernel Developer/Maintainer
Most kernel development is done with little or
no automation
● build: local, custom build scripts
● boot: manual boot testing on local hardware
● debug: custom unit-test scripts, manual verification of
results
● publish: to public mailing lists
● merged: into maintainer trees, linux-next
● test: manual test of maintainer trees, linux-next
○ but many (most?) developers don't do this
Current workflow: development
● Code review on mailing list
● build/boot testing by maintainers
● build testing in linux-next (manual)
○ several developers do manual build tests of their pet
platforms in linux-next and report failures
● Intel's 0-day tester (automated, but closed)
○ regular, automatic build tests
○ multi-arch build tests
○ boot tests (x86)
○ automatic git bisect for failures
○ very fast results
○ detailed email reports
○ extremely useful
Current workflow: validation
This model is "good enough" for most
developers and maintainers, so...
Why should we use Jenkins/LAVA?
Linaro test/validation will have to be
● at least as easy to use (locally and remotely)
● output/results more useful
● faster
○ build time
○ diagnostic time
Current workflow: "good enough"
● Local testing: aid in build, boot, test cycle
○ local LAVA install, using local boards
○ reduce duplication of custom scripts/setup
○ encourage writing LAVA-ready tests
○ easy to switch between local, and remote LAVA lab
● Remote CI: broader coverage
○ "I'm about ready to push this, I wonder if broke any
other platforms..."
○ automatic, fast (ish) response
Potential Usage models
● Has to be easy to install
○ packaged (deb, rpm)
○ or git repo for development (bzr is ......)
● Has to fit into existing developer workflow
○ LAVA does not exclusively own hardware
○ developers have non-Linaro platforms
○ command-line driven
○ must co-exist with existing interactive use of boards
■ existing Apache setup
■ existing TFTP setup
■ existing, customized bootloaders
■ ...
Local testing: LAVA
● Broad testing
● multi-arch (not just ARM)
● ARM: all defconfigs (not just Linaro boards)
○ also: allnoconfig, allmodconfig, randconfig, ...
(see build-loop sketch below)
● Continuous builds
○ Linus' tree, linux-next, arm-soc/for-next, ...
○ developers can submit their own branches
● On-demand builds
○ register a tree/branch
○ push triggers a build
● fast, automatic reporting of failures
○ without manual monitoring/clicking through Jenkins
Remote CI
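A minimal sketch of that broad-coverage build loop, assuming an arm-linux-gnueabi- cross toolchain on PATH; the defconfig names are real kernel targets, the loop itself is illustrative:

    #!/bin/sh
    for cfg in omap2plus_defconfig multi_v7_defconfig allnoconfig allmodconfig; do
        mkdir -p "build-$cfg"
        make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- O="build-$cfg" "$cfg"
        if make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- O="build-$cfg" -j8 zImage modules; then
            echo "$cfg: pass"
        else
            echo "$cfg: FAIL"    # a real CI loop would mail the log here
        fi
    done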
Tracking build breakage in upstream trees
● when did the build start breaking
● what are the exact build error messages
(without a Jenkins click-fest)
● which commit (probably) broke the build
○ automated bisect (see sketch below)
Useful output: build testing
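The automated bisect can be scripted with stock git; a minimal sketch, with hypothetical bad/good refs:

    git bisect start next/master v3.10     # bad ref first, then last known good
    git bisect run sh -c '
        make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- omap2plus_defconfig &&
        make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j8 zImage
    '                                      # exit 0 = good, non-zero = bad
    git bisect reset                       # back to where we started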
Where is the line between Jenkins and LAVA?
● Jenkins == build, LAVA == test?
● when a LAVA test fails how do I know...
○ was this a new/updated test?
○ was this a new/updated kernel?
○ if so, can I get to the Jenkins build?
In less than 10 clicks?
Issues: Big picture
● "Master image" is not useful
○ LAVA assumes boards are powered on and running a
master image (or will reboot into a master image)
○ assumptions about SD card existence, partitioning...
○ assumptions about shell prompts
linaro-test [rc=0] #
○ etc. etc.
● Goal: LAVA directly controls bootloader
○ netboot: get kernel + DTB + initrd via TFTP (see
netboot sketch below)
○ extension via board-specific bootloader scripting
Tyler's new "bootloader" device support in LAVA
appears to have mostly solved this!
Issues: LAVA design
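A minimal sketch of the netboot goal as a U-Boot command sequence; load addresses, file names and the server IP are hypothetical and board-specific, tftpboot and bootz are standard U-Boot commands:

    setenv serverip 192.168.1.1           # hypothetical TFTP server
    tftpboot 0x80200000 zImage            # kernel
    tftpboot 0x81600000 board.dtb         # DTB (hypothetical file name)
    tftpboot 0x82000000 initrd.img        # initrd; tftpboot sets ${filesize}
    setenv rdsize ${filesize}
    bootz 0x80200000 0x82000000:${rdsize} 0x81600000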
● Terminology learning curve
○ dispatcher, scheduler, dashboard
○ device, device-type
○ What is a bundle?
○ WTF is a bundle stream?
○ Documentation... not helpful (enough said)
● Navigation
○ click intensive
○ how to get from a log to the test results? or...
○ from a test back to the boot log?
○ what about build log (Jenkins?)
○ can I navigate from Jenkins log to the LAVA test?
Issues: LAVA usability
Kernel + modules: omap2plus_defconfig (see timing
sketch below)
● 1 minute
○ hackbox.linaro.org (-j48: 12 x 3.5GHz Xeon, 24G)
● 1.5 minutes
○ khilman local (-j24: 6 x 3.3GHz i7, 16G RAM)
● 8 minutes
○ MacBook Air (-j8: 2 x 1.8GHz i7, 4G RAM)
● 14 minutes
○ Thinkpad T61 (-j4: 2 x Core2Duo, 4G RAM)
● 16 minutes
○ Linaro Jenkins (-j8: EC2 node, built in tmpfs)
● 17 minutes
○ ARM chromebook (-j4: 2 x 1.7 GHz A15, 2G RAM)
Issues: Jenkins performance
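For reference, a sketch of how such timings can be reproduced, assuming an arm-linux-gnueabi- cross compiler; pick -j to match the machine, as in the list above:

    make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- omap2plus_defconfig
    time make ARCH=arm CROSS_COMPILE=arm-linux-gnueabi- -j24 zImage modules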
Linaro Test and
Validation Summit
Grant Likely
LCE13 - Dublin, July 2013
LEG Engineering