A Benchmark of
Five Node.js
Logging Libraries
Presented by:
Introduction to our benchmark
This benchmark compares performance and reliability among three popular debug logging libraries for Node.js. It also compares performance between two popular Express request logging libraries.

GOAL: Measure the time required to log a large number of messages, and compare the reliability of these libraries by recording message drop rates.
More details on our blog: loggly.com/node-js-benchmark/
Libraries tested
The following libraries were tested:

Debug-level logging
• Bunyan
• Log4js
• Winston

HTTP request logging with Express
• Express-Winston
• Morgan
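For orientation, here is a minimal sketch of how each of these libraries is typically set up (illustrative only, not the benchmark's actual configuration; logger names and options are placeholders, and the winston calls assume the current winston 3-style API):

```js
// Debug-level logging
const bunyan = require('bunyan');
const log4js = require('log4js');
const winston = require('winston');

const bunyanLog = bunyan.createLogger({ name: 'benchmark', level: 'debug' });
bunyanLog.debug('bunyan debug message');

const log4jsLog = log4js.getLogger();
log4jsLog.level = 'debug';
log4jsLog.debug('log4js debug message');

const winstonLog = winston.createLogger({
  level: 'debug',
  transports: [new winston.transports.Console()]
});
winstonLog.debug('winston debug message');

// HTTP request logging with Express (both middlewares shown together for brevity)
const express = require('express');
const morgan = require('morgan');
const expressWinston = require('express-winston');

const app = express();
app.use(morgan('combined'));
app.use(expressWinston.logger({
  transports: [new winston.transports.Console()],
  format: winston.format.json()
}));
```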
Setup and configuration:
Test structure
• DEBUG logging, syslog (UDP and TCP): 100,000 DEBUG logs over 10 iterations
• DEBUG logging, file transport: 10,000 DEBUG logs over 100 iterations
• Request logging, syslog (UDP and TCP): 100,000 requests over 10 iterations, maximum 500 concurrent connections
• Request logging, file transport: not tested
For the debug-level logging tests, a list of prime numbers was generated in the background to simulate a workload.

For the request logging tests, an Express app was created that returned the plain text “Hello World” when requested. The benchmark results were gathered using ApacheBench.
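To make the debug-logging setup concrete, here is a rough sketch of one such iteration (not the authors' actual harness; the Bunyan logger, message counts, and the inlined prime-number workload are illustrative, and in the real benchmark the workload ran alongside the logging rather than after it):

```js
const bunyan = require('bunyan');
const log = bunyan.createLogger({ name: 'benchmark', level: 'debug' });

// Simulated workload: generate a list of prime numbers.
function generatePrimes(count) {
  const primes = [];
  for (let n = 2; primes.length < count; n++) {
    if (primes.every((p) => n % p !== 0)) primes.push(n);
  }
  return primes;
}

// One iteration: log 100,000 DEBUG messages and measure elapsed time.
function runIteration(messageCount) {
  const start = Date.now();
  for (let i = 0; i < messageCount; i++) {
    log.debug({ seq: i }, 'benchmark message');
  }
  generatePrimes(1000); // stand-in for the background workload
  return Date.now() - start;
}

const times = [];
for (let i = 0; i < 10; i++) {
  times.push(runIteration(100000));
}
console.log('average ms per iteration:', times.reduce((a, b) => a + b, 0) / times.length);
```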
Debug logging test results
File transport

Each library produced similar results.

Bunyan:
• Fastest of the three
• Ran 26% faster than Log4js, 34% faster than Winston
• Drop rate: 0% for all three libraries

[Chart: average time (ms): Bunyan 282, Log4js 381, Winston 426. Drop rate: 0% for all three.]
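For reference, writing debug logs to a file looks roughly like this in each library (a sketch based on each library's documented file options; the paths are placeholders, the Log4js snippet assumes the 2.x-style configure format, and the Winston call assumes the winston 3-style API):

```js
const bunyan = require('bunyan');
const log4js = require('log4js');
const winston = require('winston');

// Bunyan: file stream
const bunyanLog = bunyan.createLogger({
  name: 'benchmark',
  streams: [{ level: 'debug', path: '/tmp/bunyan.log' }]
});

// Log4js: file appender
log4js.configure({
  appenders: { file: { type: 'file', filename: '/tmp/log4js.log' } },
  categories: { default: { appenders: ['file'], level: 'debug' } }
});
const log4jsLog = log4js.getLogger();

// Winston: File transport
const winstonLog = winston.createLogger({
  level: 'debug',
  transports: [new winston.transports.File({ filename: '/tmp/winston.log' })]
});
```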
Debug logging test results
Syslog transport: TCP
• Log4js was the fastest library over TCP
• Tradeoff: <1% packet drop
• Bunyan was the fastest library with a 0% drop rate
• Winston had the highest average time and the most fluctuation between tests
[Chart: average time (ms): Log4js 9,022, Bunyan 11,392, Winston 12,210. Drop rates: Bunyan 0%, Log4js <1%, Winston 0%.]
Debug logging test results
Syslog transport: UDP

For the UDP tests, the libraries showed striking differences in both efficiency and reliability.

Log4js:
• Fastest library and smallest drop rate
• 36% faster than Bunyan, 71% faster than Winston
• Drop rate: 16 percentage points lower than Bunyan, 25 points lower than Winston

[Chart: average time (ms): Log4js 3,537, Bunyan 4,769, Winston 10,537. Drop rates: Log4js 27%, Bunyan 43%, Winston 52%.]
A note on drop rates
• Because the system was deliberately pushed to its limits for testing purposes, the syslog drop rates seen here are greater than you should ever experience in a real-world application
• Requests sent to a real-world application are normally spread out over time, which gives the system a chance to catch up
HTTP request logging test results
Syslog transport: TCP
Morgan processed each request 56% faster than Winston, which suggests that
it would be the ideal request logging library in a real-world application.
[Chart: average total request time per iteration of 100,000 requests (ms): Morgan 47,360, Winston 91,628. Drop rate: 0% for both.]

[Chart: average time to process each individual request (ms): Morgan 108, Winston 243. Drop rate: 0% for both.]
HTTP request logging test results
Syslog transport: UDP
As with TCP, Morgan ran faster than Winston.
[Chart: average total request time per iteration of 100,000 requests (ms): Morgan 50,002, Winston 62,375. Drop rate: <1% for both.]

[Chart: average time to process each individual request (ms): Morgan 129, Winston 163. Drop rate: <1% for both.]
Conclusion
• With Node.js logging libraries, the tradeoff between efficiency and reliability is not always significant
• When writing debug logs to a file, Bunyan is the fastest of the three libraries
• When using the syslog protocol, Log4js is the fastest of the three libraries for both TCP and UDP
• You may still choose Bunyan over Log4js for TCP, since Bunyan had a 0% drop rate
• For request logging, Morgan consistently outperformed Winston on efficiency
Sending Node.js log data to Loggly
• When sending Node.js logs to Loggly, we recommend the winston-loggly-bulk library
• Loggly focused on Winston due to its popularity and high ratings from developers
See the documentation here.
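A minimal sketch of that setup, following winston-loggly-bulk's documented usage (the token, subdomain, and tag values are placeholders, and the call style assumes winston 3; check the documentation for the versions you use):

```js
const winston = require('winston');
const { Loggly } = require('winston-loggly-bulk');

winston.add(new Loggly({
  token: 'YOUR-CUSTOMER-TOKEN',   // placeholder: your Loggly customer token
  subdomain: 'your-subdomain',    // placeholder: your Loggly subdomain
  tags: ['nodejs'],
  json: true
}));

winston.log('info', 'Hello World from Node.js!');
```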
About the author
Lukas Rossi is an experienced front- and back-end web designer
and developer who has developed numerous mobile applications
and websites. Lukas discovered an interest in web development
during high school, and since then he hasn't stopped learning the
latest technologies. He is the manager of App Dimensions
[appdimensions.com], a company that creates mobile apps to
improve everyday life. In his free time, Lukas enjoys bike riding and
various other outdoor activities.
Appendix
Setup and configuration:
Additional details on test structure
• DEBUG logging: 100,000 DEBUG logs over 10 iterations via syslog (UDP and TCP); 10,000 DEBUG logs over 100 iterations via the file transport
• Request logging: 100,000 requests over 10 iterations, maximum 500 concurrent connections, via syslog (UDP and TCP); not tested over the file transport
All of the libraries were tested and compared using syslog UDP and TCP (rsyslog was used in the tests), as well as file transports for the debug-level logging tests. Benchmarking with the file transport used more iterations (with fewer log events per iteration) to avoid overloading the system. For each specific test, a total of 1 million log events was logged. Each specific test was run three times, and the results were then averaged. Log data was sent to a local server, so the tests were not affected by network lag.

For the debug-level logging tests, a list of prime numbers was generated in the background to simulate a workload.

For the request logging tests, an Express app was created that returned the plain text “Hello World” when requested. The benchmark results were gathered using ApacheBench.
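A sketch of the kind of target app and load command this describes (illustrative; the port, Morgan log format, and file name are assumptions, not the benchmark's exact code):

```js
// app.js: minimal Express app with Morgan request logging
const express = require('express');
const morgan = require('morgan');

const app = express();
app.use(morgan('combined'));
app.get('/', (req, res) => res.send('Hello World'));
app.listen(3000, () => console.log('listening on port 3000'));

// Driven with ApacheBench, matching the stated parameters:
//   ab -n 100000 -c 500 http://localhost:3000/
```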
Setup and configuration:
Server hardware
All of the tests were performed on a
server with burstable high
frequency Intel Xeon processors
and 4GiB of RAM. The Node.js
processes, rsyslog, and
ApacheBench were all running on
this server, so there was no
network latency affecting the tests.
Logging format and library configuration
The standard logging format for each specific library was used for the tests. The configuration for each library
was kept as close to default as possible, so keep in mind that customizing the configuration to suit your
specific application may improve each library’s performance.
Many of the libraries require additional code in order to log with syslog. The following transport libraries were added:
• winston-syslog (https://github.com/winstonjs/winston-syslog) for debug-level syslog logging with Winston
• winston-rsyslog2 (https://github.com/imsky/winston-rsyslog2) for request logging over syslog with Winston
• log4js-tcp (https://github.com/tianyk/log4js-tcp) for TCP logging with Log4js
• node-bunyan-syslog (https://github.com/mcavage/node-bunyan-syslog) for syslog logging with Bunyan
• node-syslog (https://github.com/cloudhead/node-syslog) for TCP logging with Morgan
• syslogudp (https://www.npmjs.com/package/syslogudp) for UDP logging with Morgan
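As an example of that extra wiring, attaching winston-syslog looks roughly like this (a sketch based on winston-syslog's documented options; the host, port, protocol, and app_name values are assumptions matching a local rsyslog setup, and the call style assumes winston 3):

```js
const winston = require('winston');
require('winston-syslog'); // registers winston.transports.Syslog

const logger = winston.createLogger({
  levels: winston.config.syslog.levels,
  level: 'debug',
  transports: [
    new winston.transports.Syslog({
      host: 'localhost',
      port: 514,
      protocol: 'udp4',          // use 'tcp4' for the TCP tests
      app_name: 'node-benchmark'
    })
  ]
});

logger.debug('debug message over syslog');
```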
So you implemented logging. Now what?
Centralize all of your log data and see what matters fast.
Try Loggly for free!
Loggly is the world’s most popular cloud-based, enterprise-class log management service, serving more
than 10,000 customers including one-third of the Fortune 500. The Loggly service integrates into the
engineering processes of teams employing continuous deployment and DevOps practices to reduce MTTR,
improve service quality, accelerate innovation, and make better use of valuable development resources.