
Daily Tech Digest - July 20, 2025


Quote for the day:

“Wisdom equals knowledge plus courage. You have to not only know what to do and when to do it, but you have to also be brave enough to follow through.” -- Jarod Kintz


Lean Agents: The Agile Workforce of Agentic AI

Organizations are tired of gold-plated mega systems that promise everything and deliver chaos. Enter frameworks like AutoGen and LangGraph, alongside protocols such as MCP, all of which enable Lean Agents to be spun up on-demand, plug into APIs, execute a defined task, then quietly retire. This is a radical departure from heavyweight models that stay online indefinitely, consuming compute cycles, budget, and attention. ... Lean Agents are purpose-built AI workers: minimal in design, maximally efficient in function. Think of them as stateless or scoped-memory micro-agents: they wake when triggered, perform a discrete task like summarizing an RFP clause or flagging anomalies in payments, and then gracefully exit, freeing resources and eliminating runtime drag. Lean Agents are to AI what Lambda functions are to code: ephemeral, single-purpose, and cloud-native. They may hold just enough context to operate reliably but otherwise avoid persistent state that bloats memory and complicates governance. ... From a technology standpoint, these frameworks, combined with the emerging Model Context Protocol (MCP), give engineering teams the scaffolding to create discoverable, policy-aware agent meshes. Lean Agents transform AI from a monolithic “brain in the cloud” into an elastic workforce that can be budgeted, secured, and reasoned about like any other microservice.
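
To make the wake-act-exit lifecycle concrete, here is a minimal, framework-agnostic sketch; the LeanAgent class, the summarize_rfp_clause task, and the trigger payload are hypothetical illustrations, not an AutoGen or LangGraph API.

```python
# Minimal sketch of a "Lean Agent": spun up on a trigger, given only the
# scoped context it needs, then discarded. Names and the processing logic
# are hypothetical placeholders, not a real framework API.
from dataclasses import dataclass

@dataclass
class LeanAgent:
    task: str                 # single, discrete responsibility
    scoped_context: dict      # just enough state to operate reliably

    def run(self, payload: str) -> str:
        # In a real system this would call an LLM or a tool via MCP;
        # here we only illustrate the lifecycle: wake -> act -> exit.
        return f"[{self.task}] processed {len(payload)} chars"

def on_trigger(event: dict) -> str:
    agent = LeanAgent(task="summarize_rfp_clause",
                      scoped_context={"tenant": event["tenant"]})
    try:
        return agent.run(event["clause_text"])
    finally:
        del agent   # no persistent state survives the invocation

print(on_trigger({"tenant": "acme", "clause_text": "The supplier shall..."}))
```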


Cloud Repatriation Is Harder Than You Think

Repatriation is not simply a reverse lift-and-shift process. Workloads that have developed in the cloud often have specific architectural dependencies that are not present in on-premises environments. These dependencies can include managed services like identity providers, autoscaling groups, proprietary storage solutions, and serverless components. As a result, moving a workload back on-premises typically requires substantial refactoring and a thorough risk assessment. Untangling these complex layers is more than just a migration; it represents a structural transformation. If the service expectations are not met, repatriated applications may experience poor performance or even fail completely. ... You cannot migrate what you cannot see. Accurate workload planning relies on complete visibility, which includes not only documented assets but also shadow infrastructure, dynamic service relationships, and internal east-west traffic flows. Static tools such as CMDBs or Visio diagrams often fall out of date quickly and fail to capture real-time behavior. These gaps create blind spots during the repatriation process. Application dependency mapping addresses this issue by illustrating how systems truly interact at both the network and application layers. Without this mapping, teams risk disrupting critical connections that may not be evident on paper.


AI Agents Are Creating a New Security Nightmare for Enterprises and Startups

The agentic AI landscape is still in its nascent stages, making it the opportune moment for engineering leaders to establish robust foundational infrastructure. While the technology is rapidly evolving, the core patterns for governance are familiar: Proxies, gateways, policies, and monitoring. Organizations should begin by gaining visibility into where agents are already running autonomously — chatbots, data summarizers, background jobs — and add basic logging. Even simple logs like “Agent X called API Y” are better than nothing. Routing agent traffic through existing proxies or gateways in a reverse mode can eliminate immediate blind spots. Implementing hard limits on timeouts, max retries, and API budgets can prevent runaway costs. While commercial AI gateway solutions are emerging, such as Lunar.dev, teams can start by repurposing existing tools like Envoy, HAProxy, or simple wrappers around LLM APIs to control and observe traffic. Some teams have built minimal “LLM proxies” in days, adding logging, kill switches, and rate limits. Concurrently, defining organization-wide AI policies — such as restricting access to sensitive data or requiring human review for regulated outputs — is crucial, with these policies enforced through the gateway and developer training.
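
As a rough illustration of the "minimal LLM proxy" idea, here is a toy sketch with logging, a kill switch, and a crude per-minute budget in front of an upstream model API; the endpoint URL, request shape, and limits are assumptions for illustration only.

```python
# Toy sketch of a minimal "LLM proxy": every agent call is logged, rate-limited,
# and subject to a kill switch before it reaches the upstream model API.
# UPSTREAM_URL and the request shape are placeholders, not a specific vendor API.
import time, logging, requests

logging.basicConfig(level=logging.INFO)
UPSTREAM_URL = "https://llm.internal.example/v1/chat"   # hypothetical endpoint

KILL_SWITCH = False          # flip to True to stop all agent traffic
MAX_CALLS_PER_MINUTE = 60
_call_times: list[float] = []

def proxy_llm_call(agent_id: str, prompt: str, timeout_s: float = 10.0) -> str:
    if KILL_SWITCH:
        raise RuntimeError("agent traffic disabled by kill switch")

    # Crude sliding-window rate limit shared by all agents.
    now = time.time()
    _call_times[:] = [t for t in _call_times if now - t < 60]
    if len(_call_times) >= MAX_CALLS_PER_MINUTE:
        raise RuntimeError("per-minute API budget exceeded")
    _call_times.append(now)

    # Even a one-line log ("Agent X called API Y") beats no visibility at all.
    logging.info("agent=%s called upstream=%s prompt_chars=%d",
                 agent_id, UPSTREAM_URL, len(prompt))

    resp = requests.post(UPSTREAM_URL, json={"prompt": prompt}, timeout=timeout_s)
    resp.raise_for_status()
    return resp.json().get("text", "")
```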


The Evolution of Software Testing in 2025: A Comprehensive Analysis

The testing community has evolved beyond the conventional shift-left and shift-right approaches to embrace what industry leaders term "shift-smart" testing. This holistic strategy recognizes that quality assurance must be embedded throughout the entire software development lifecycle, from initial design concepts through production monitoring and beyond. While shift-left testing continues to emphasize early validation during development phases, shift-right testing has gained equal prominence through its focus on observability, chaos engineering, and real-time production testing. ... Modern testing platforms now provide insights into how testing outcomes relate to user churn rates, release delays, and net promoter scores, enabling organizations to understand the direct business impact of their quality assurance investments. This data-driven approach transforms testing from a technical activity into a business-critical function with measurable value. Artificial intelligence platforms are revolutionizing test prioritization by predicting where failures are most likely to occur, allowing testing teams to focus their efforts on the highest-risk areas. ... Modern testers are increasingly taking on roles as quality coaches, working collaboratively with development teams to improve test design and ensure comprehensive coverage aligned with product vision.


7 lessons I learned after switching from Google Drive to a home NAS

One of the first things I realized was that a NAS is only as fast as the network it’s sitting on. Even though my NAS had decent specs, file transfers felt sluggish over Wi-Fi. The new drives weren’t at fault, but my old router was proving to be a bottleneck. Once I wired things up and upgraded my router, the difference was night and day. Large files opened like they were local. So, if you’re expecting killer performance, pay attention to the network gear too, because it matters just as much as the NAS itself. ... There was a random blackout at my place, and until then, I hadn’t hooked my NAS to a power backup system. As a result, the NAS shut off mid-transfer without warning. I couldn’t tell if I had just lost a bunch of files or if the hard drives had been damaged too, and that was more than a little scary. I couldn’t let this happen again, so I decided to connect the NAS to an uninterruptible power supply unit (UPS). ... I assumed that once I uploaded my files to Google Drive, they were safe. Google would do the tiring job of syncing, duplicating, and mirroring in some faraway data center. But in a self-hosted environment, you are the one responsible for all that. I had to put safety nets in place for possible instances where a drive fails or the NAS dies. My current strategy involves keeping some archived files on a portable SSD, a few important folders synced to the cloud, and some everyday folders on my laptop set up to sync two-way with my NAS.


5 key questions your developers should be asking about MCP

Despite all the hype about MCP, here’s the straight truth: It’s not a massive technical leap. MCP essentially “wraps” existing APIs in a way that’s understandable to large language models (LLMs). Sure, a lot of services already have an OpenAPI spec that models can use. For small or personal projects, the objection that MCP “isn’t that big a deal” is pretty fair. ... Remote deployment obviously addresses scaling but opens up a can of worms around transport complexity. The original HTTP+SSE approach was replaced by a March 2025 streamable HTTP update, which tries to reduce complexity by putting everything through a single /messages endpoint. Even so, this isn’t really needed for most companies that are likely to build MCP servers. But here’s the thing: A few months later, support is spotty at best. Some clients still expect the old HTTP+SSE setup, while others work with the new approach — so, if you’re deploying today, you’re probably going to support both. Protocol detection and dual transport support are a must. ... However, the biggest security consideration with MCP is around tool execution itself. Many tools need broad permissions to be useful, which means sweeping scope design is inevitable. Even without a heavy-handed approach, your MCP server may access sensitive data or perform privileged operations.
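
To show what dual transport support might look like in practice, here is an illustrative sketch that exposes a single streamable-HTTP-style endpoint and a legacy SSE endpoint side by side; the paths and the event payload are assumptions for illustration, not a normative reading of the MCP spec.

```python
# Illustrative sketch of serving both transports while clients catch up:
# a legacy HTTP+SSE endpoint pair plus a single streamable-HTTP endpoint.
# Paths and payloads are assumptions, not the MCP spec itself.
from flask import Flask, request, Response, jsonify

app = Flask(__name__)

def handle_mcp_message(msg: dict) -> dict:
    # Placeholder for real MCP request handling.
    return {"jsonrpc": "2.0", "id": msg.get("id"), "result": {"ok": True}}

# Newer streamable-HTTP style: everything goes through one endpoint.
@app.post("/messages")
def messages():
    return jsonify(handle_mcp_message(request.get_json(force=True)))

# Older HTTP+SSE style: the client opens an event stream and POSTs separately.
@app.get("/sse")
def sse():
    def stream():
        yield "event: endpoint\ndata: /messages\n\n"   # tell the client where to POST
    return Response(stream(), mimetype="text/event-stream")

if __name__ == "__main__":
    app.run(port=8080)
```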


Firmware Vulnerabilities Continue to Plague Supply Chain

"The major problem is that the device market is highly competitive and the vendors [are] competing not only to the time-to-market, but also for the pricing advantages," Matrosov says. "In many instances, some device manufacturers have considered security as an unnecessary additional expense." The complexity of the supply chain is not the only challenge for the developers of firmware and motherboards, says Martin Smolár, a malware researcher with ESET. The complexity of the code is also a major issue, he says. "Few people realize that UEFI firmware is comparable in size and complexity to operating systems — it literally consists of millions of lines of code," he says. ... One practice that hampers security: Vendors will often try to only distribute security fixes under a non-disclosure agreement, leaving many laptop OEMs unaware of potential vulnerabilities in their code. That's the exact situation that left Gigabyte's motherboards with a vulnerable firmware version. Firmware vendor AMI fixed the issues years ago, but the issues have still not propagated out to all the motherboard OEMs. ... Yet, because firmware is always evolving as better and more modern hardware is integrated into motherboards, the toolset also need to be modernized, Cobalt's Ollmann says.


Beyond Pilots: Reinventing Enterprise Operating Models with AI

Historically, AI models required vast volumes of clean, labeled data, making insights slow and costly. Large language models (LLMs) have upended this model, pre-trained on billions of data points and able to synthesize organizational knowledge, market signals, and past decisions to support complex, high-stakes judgment. AI is becoming a powerful engine for revenue generation through hyper-personalization of products and services, dynamic pricing strategies that react to real-time market conditions, and the creation of entirely new service offerings. More significantly, AI is evolving from completing predefined tasks to actively co-creating superior customer experiences through sophisticated conversational commerce platforms and intelligent virtual agents that understand context, nuance, and intent in ways that dramatically enhance engagement and satisfaction. ... In R&D and product development, AI is revolutionizing operating models by enabling faster go-to-market cycles. AI can simulate countless design alternatives, optimize complex supply chains in real time, and co-develop product features based on deep analysis of customer feedback and market trends. These systems can draw from historical R&D successes and failures across industries, accelerating innovation by applying lessons learned from diverse contexts and domains.


Alternative clouds are on the rise

Alt clouds, in their various forms, represent a departure from the “one size fits all” mentality that initially propelled the public cloud explosion. These alternatives to the Big Three prioritize specificity, specialization, and often offer an advantage through locality, control, or workload focus. Private cloud, epitomized by offerings from VMware and others, has found renewed relevance in a world grappling with escalating cloud bills, data sovereignty requirements, and unpredictable performance from shared infrastructure. The old narrative that “everything will run in the public cloud eventually” is being steadily undermined as organizations rediscover the value of dedicated infrastructure, either on-premises or in hosted environments that behave, in almost every respect, like cloud-native services. ... What begins as cost optimization or risk mitigation can quickly become an administrative burden, soaking up engineering time and escalating management costs. Enterprises embracing heterogeneity have no choice but to invest in architects and engineers who are familiar not only with AWS, Azure, or Google, but also with VMware, CoreWeave, a sovereign European platform, or a local MSP’s dashboard. 


Making security and development co-owners of DevSecOps

In my view, DevSecOps should be structured as a shared responsibility model, with ownership but no silos. Security teams must lead from a governance and risk perspective, defining the strategy, standards, and controls. However, true success happens when development teams take ownership of implementing those controls as part of their normal workflow. In my career, especially while leading security operations across highly regulated industries, including finance, telecom, and energy, I’ve found this dual-ownership model most effective. ... However, automation without context becomes dangerous, especially closer to deployment. I’ve led SOC teams that had to intervene because automated security policies blocked deployments over non-exploitable vulnerabilities in third-party libraries. That’s a classic example where automation caused friction without adding value. So the balance is about maturity: automate where findings are high-confidence and easily fixable, but maintain oversight in phases where risk context matters, like release gates, production changes, or threat hunting. ... Tools are often dropped into pipelines without tuning or context, overwhelming developers with irrelevant findings. The result? Fatigue, resistance, and workarounds.

Daily Tech Digest - June 02, 2023

A Data Scientist’s Essential Guide to Exploratory Data Analysis

Analyzing the individual characteristics of each feature is crucial as it will help us decide on their relevance for the analysis and the type of data preparation they may require to achieve optimal results. For instance, we may find values that are extremely out of range and may refer to inconsistencies or outliers. We may need to standardize numerical data or perform a one-hot encoding of categorical features, depending on the number of existing categories. Or we may have to perform additional data preparation to handle numeric features that are shifted or skewed, if the machine learning algorithm we intend to use expects a particular distribution. ... For Multivariate Analysis, best practices focus mainly on two strategies: analyzing the interactions between features, and analyzing their correlations. ... Interactions let us visually explore how each pair of features behaves, i.e., how the values of one feature relate to the values of the other. 
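
A short sketch of these univariate and multivariate checks, assuming pandas and scikit-learn and a made-up two-column dataset:

```python
# Small sketch of the checks described above; column names and values are invented.
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "income": [32_000, 41_500, 39_000, 1_200_000],   # one suspicious outlier
    "segment": ["retail", "retail", "sme", "corporate"],
})

# Individual characteristics: ranges, skew, and category counts.
print(df["income"].describe())        # spot out-of-range values / outliers
print(df["income"].skew())            # strong skew may call for a transform
print(df["segment"].value_counts())   # how many categories to one-hot encode

# Typical preparation steps: standardize numeric data, one-hot encode categoricals.
df["income_scaled"] = StandardScaler().fit_transform(df[["income"]]).ravel()
df = pd.get_dummies(df, columns=["segment"])

# Multivariate analysis: pairwise correlations between numeric features.
print(df.select_dtypes("number").corr())
```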


Resilient data backup and recovery is critical to enterprise success

So, what must IT leaders consider? The first step is to establish data protection policies that include encryption and least privilege access permissions. Businesses should then ensure they have three copies of their data – the production copy already exists and is effectively the first copy. The second copy should be stored on a different media type, not necessarily in a different physical location (the logic behind it is to not store your production and backup data in the same storage device). The third copy could or should be an offsite copy that is also offline, air-gapped, or immutable (Amazon S3 with Object Lock is one example). Organizations also need to make sure they have a centralized view of data protection across all environments for greater management, monitoring and governance, and they need orchestration tools to help automate data recovery. Finally, organizations should conduct frequent backup and recovery testing to make sure that everything works as it should.
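
As one possible illustration of the "offsite, immutable third copy", here is a hedged sketch using S3 Object Lock via boto3; the bucket, key, and retention window are placeholders, and Object Lock must already have been enabled when the bucket was created.

```python
# Sketch of writing an immutable offsite copy with S3 Object Lock
# (compliance-mode retention). Bucket and key names are placeholders.
import datetime
import boto3

s3 = boto3.client("s3")
retain_until = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=30)

with open("backup-2025-07-20.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="example-offsite-backups",          # placeholder bucket
        Key="nightly/backup-2025-07-20.tar.gz",
        Body=f,
        ObjectLockMode="COMPLIANCE",               # object cannot be deleted or overwritten
        ObjectLockRetainUntilDate=retain_until,    # ...until this date passes
    )
```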


Data Warehouse Architecture Types

Different architectural approaches offer unique advantages and cater to varying business requirements. In this comprehensive guide, we will explore different data warehouse architecture types, shedding light on their characteristics, benefits, and considerations. Whether you are building a new data warehouse or evaluating your existing architecture, understanding these options will empower you to make informed decisions that align with your organization’s goals. ... Selecting the right data warehouse architecture is a critical decision that directly impacts an organization’s ability to leverage its data assets effectively. Each architecture type has its own strengths and considerations, and there is no one-size-fits-all solution. By understanding the characteristics, benefits, and challenges of different data warehouse architecture types, businesses can align their architecture with their unique requirements and strategic goals. Whether it’s a traditional data warehouse, hub-and-spoke model, federated approach, data lake architecture, or a hybrid solution, the key is to choose an architecture that empowers data-driven insights, scalability, agility, and flexibility.


What is federated Identity? How it works and its importance to enterprise security

FIM has many benefits, including reducing the number of passwords a user needs to remember, improving their user experience and improving security infrastructure. On the downside, federated identity does introduce complexity into application architecture. This complexity can also introduce new attack surfaces, but on balance, properly implemented federated identity is a net improvement to application security. In general, we can see federated identity as improving convenience and security at the cost of complexity. ... Federated single sign-on allows for sharing credentials across enterprise boundaries. As such, it usually relies on a large, well-established entity with widespread security credibility, such as Google, Microsoft, or Amazon. In this case, applications gain not just a simplified login experience for their users, but also both the perception of and genuine reliance on high-grade security infrastructure. Put another way, even a small application can add “Sign in with Google” to its login flow relatively easily, giving users a simple login option, which keeps sensitive information in the hands of the big organization.
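
A rough sketch of the authorization-code flow behind a "Sign in with Google" button follows; the client ID, secret, and redirect URI are placeholders, and a production implementation would also validate the state value and verify the ID token's signature.

```python
# Rough sketch of the OAuth/OIDC authorization-code flow used by federated login.
# Client ID/secret and redirect URI are placeholders.
import secrets
import urllib.parse
import requests

CLIENT_ID = "YOUR_CLIENT_ID.apps.googleusercontent.com"   # placeholder
CLIENT_SECRET = "YOUR_CLIENT_SECRET"                      # placeholder
REDIRECT_URI = "https://app.example.com/auth/callback"    # placeholder

# Step 1: send the user to the identity provider.
state = secrets.token_urlsafe(16)
auth_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urllib.parse.urlencode({
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "response_type": "code",
    "scope": "openid email profile",
    "state": state,
})
print("Redirect the browser to:", auth_url)

# Step 2 (in the callback handler): exchange the returned code for tokens.
def exchange_code(code: str) -> dict:
    resp = requests.post("https://oauth2.googleapis.com/token", data={
        "code": code,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI,
        "grant_type": "authorization_code",
    })
    resp.raise_for_status()
    return resp.json()   # contains an ID token identifying the user
```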


Millions of PC Motherboards Were Sold With a Firmware Backdoor

Given the millions of potentially affected devices, Eclypsium’s discovery is “troubling,” says Rich Smith, who is the chief security officer of supply-chain-focused cybersecurity startup Crash Override. Smith has published research on firmware vulnerabilities and reviewed Eclypsium’s findings. He compares the situation to the Sony rootkit scandal of the mid-2000s. Sony had hidden digital-rights-management code on CDs that invisibly installed itself on users’ computers and in doing so created a vulnerability that hackers used to hide their malware. “You can use techniques that have traditionally been used by malicious actors, but that wasn’t acceptable, it crossed the line,” Smith says. “I can’t speak to why Gigabyte chose this method to deliver their software. But for me, this feels like it crosses a similar line in the firmware space.” Smith acknowledges that Gigabyte probably had no malicious or deceptive intent in its hidden firmware tool. But by leaving security vulnerabilities in the invisible code that lies beneath the operating system of so many computers, it nonetheless erodes a fundamental layer of trust users have in their machines. 


Minimising the Impact of Machine Learning on our Climate

There are several things we can do to mitigate the negative impact of software on our climate. They will be different depending on your specific scenario. But what they all have in common is that they should strive to be energy-efficient, hardware-efficient and carbon-aware. The GSF (Green Software Foundation) is gathering patterns for different types of software systems; these have all been reviewed by experts and agreed on by all member organisations before being published. In this section we will cover some of the patterns for machine learning as well as some good practices which are not (yet?) patterns. If we divide the actions according to the ML life cycle, or at least a simplified version of it, we get four categories: Project Planning, Data Collection, Design and Training of the ML Model, and finally Deployment and Maintenance. The project planning phase is the time to start asking the difficult questions: think about what the carbon impact of your project will be and how you plan to measure it. This is also the time to think about your SLA; overcommitting to strict latency or performance metrics that you actually don’t need can quickly become a source of emissions you could otherwise avoid.
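
As a deliberately rough example of the kind of carbon estimate the planning phase can scope out, here is a back-of-the-envelope calculation; every number in it is an assumption to be replaced with real measurements.

```python
# Back-of-the-envelope estimate of training emissions during project planning.
# All numbers below are illustrative assumptions, not measurements.
gpu_power_kw = 0.4          # average draw per GPU (kW), assumed
num_gpus = 8
training_hours = 72
pue = 1.4                   # data-center power usage effectiveness, assumed
grid_intensity = 0.35       # kg CO2e per kWh for the chosen region, assumed

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
emissions_kg = energy_kwh * grid_intensity
print(f"~{energy_kwh:.0f} kWh, ~{emissions_kg:.0f} kg CO2e for one training run")
```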


5 ways AI can transform compliance

Compliance is all about controls. Data must be classified according to multiple rules, and the movement of and access to that data recorded. It’s the perfect task for AI. Ville Somppi, vice president of industry solutions at M-Files, says: “Thanks to AI, organisations can automatically classify information and apply pre-defined compliance rules. In the case of choosing the right document category from a compliance perspective, the AI can be trained quickly with a small sample set categorised by people. This is convenient, especially when people can still correct wrong suggestions in the beginning of the learning process. ... Data pools are too big for humans to comb through. AI is the only way. In some sectors, adoption of AI has been delayed owing to regulatory issues. However, full deployment ought now to be possible. Gabriel Hopkins, chief product officer at Ripjar, says: “Banks and financial services companies face complex responsibilities when it comes to compliance activities, especially with regard to combatting the financing of terrorism and preventing the laundering of criminal proceeds.
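
To illustrate the "train quickly on a small, human-labelled sample" point, here is a tiny scikit-learn sketch; the corpus, labels, and category names are invented for illustration.

```python
# Sketch of training a document-category classifier from a small, human-labelled
# sample set. The tiny corpus and labels are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    "Data processing agreement between controller and processor",
    "Quarterly invoice for consulting services rendered",
    "Employee handbook covering leave and conduct policies",
    "Addendum to the data processing agreement on sub-processors",
]
labels = ["gdpr_contract", "finance", "hr_policy", "gdpr_contract"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(docs, labels)

# People correct wrong suggestions early on, and the corrections feed the next round.
print(clf.predict(["Invoice for cloud hosting, March"]))
```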


Former Uber CSO Sullivan on Engaging the Security Community

CISO is a lonely role. There's a really amazing camaraderie between security executives that I'm not sure exists in any other kind of leadership role. The CISO role is pretty new compared to the other leadership roles. It's far from settled what kind of background is ideal for the role. It's far from settled where the person in the role should report. It’s far from settled what kind of a budget you're going to get. It's far from settled in terms of what type of decision-making power you're going to have. So, as a result, I think security leaders often feel lonely and on an island. They have an executive team above them that expects them to know all the answers about security, and then they have a team underneath them that expects them to know all the answers about security. So, they can't betray ignorance to anybody without undermining their role. And so, the security leader community often turns to each other for support, for guidance. There are a good number of Slack channels and conferences that are just CISOs talking through the role and asking for best practices and advice on how to deal with hard situations.


Google Drive Deficiency Allows Attackers to Exfiltrate Workspace Data Without a Trace

Mitiga reached out to Google about the issue, but the researchers said they have not yet received a response, adding that Google's security team typically doesn't recognize forensics deficiencies as a security problem. This highlights a concern when working with software-as-a-service (SaaS) and cloud providers, in that organizations that use their services "are solely dependent on them regarding what forensic data you can have," Aspir notes. "When it comes to SaaS and cloud providers, we’re talking about a shared responsibility regarding security because you can't add additional safeguards within what is given." ... Fortunately, there are steps that organizations using Google Workspace can take to ensure that the issue outlined by Mitiga isn't exploited, the researchers said. This includes keeping an eye out for certain actions in their Admin Log Events feature, such as events about license assignments and revocations, they said.
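
A hedged sketch of watching for such events via the Admin SDK Reports API follows; the credential setup, scopes, and the exact event names worth filtering on are assumptions to verify against Google's documentation.

```python
# Sketch of scanning Admin log events for license assignment/revocation activity
# via the Admin SDK Reports API. Event-name filtering here is a rough assumption.
from googleapiclient.discovery import build

def license_events(creds, max_results=100):
    service = build("admin", "reports_v1", credentials=creds)
    resp = service.activities().list(
        userKey="all", applicationName="admin", maxResults=max_results
    ).execute()
    for activity in resp.get("items", []):
        for event in activity.get("events", []):
            if "LICENSE" in event.get("name", "").upper():
                yield (activity["id"]["time"],
                       event["name"],
                       activity.get("actor", {}).get("email"))
```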


How defense contractors can move from cybersecurity to cyber resilience

We’re thinking way too small about a coordinated cyberattack’s capacity for creating major disruption to our daily lives. One recent, vivid illustration of that fact happened in 2022, when the Russia-linked cybercrime group Conti launched a series of prolonged attacks on the core infrastructure of the country of Costa Rica, plunging the country into chaos for months. Over a period of two weeks, Conti tried to breach different government organizations nearly every day, targeting a total of 27 agencies. Soon after that, the group launched a separate attack on the country’s health care system, causing tens of thousands of appointments to be canceled and patients to experience delays in getting treatment. The country declared a national emergency and eventually, with the help of allies around the world including the United States and Microsoft, regained control of its systems. The US federal government’s strict compliance standards often impede businesses from excelling beyond the most basic requirements. 



Quote for the day:

"Uncertainty is not an indication of poor leadership; it underscores the need for leadership." -- Andy Stanley

Daily Tech Digest - February 01, 2023

Top 6 roadblocks derailing data-driven projects

Making the challenge of getting sufficient funding for data projects even more daunting is the fact that they can be expensive endeavors. Data-driven projects require a substantial investment of resources and budget from inception, Clifton says. “They are generally long-term projects that can’t be applied as a quick fix to address urgent priorities,” Clifton says. “Many decision makers don’t fully understand how they work or deliver for the business. The complex nature of gathering data to use it efficiently to deliver clear [return on investment] is often intimidating to businesses because one mistake can exponentially drive costs.” When done correctly, however, these projects can streamline and save the organization time and money over the long haul, Clifton says. “That’s why it is essential to have a clear strategy for maximizing data and then ensuring that key stakeholders understand the plan and execution,” he says. In addition to investing in the tools needed to support data-driven projects, organizations need to recruit and retain professionals such as data scientists. 


IoT, connected devices biggest contributors to expanding application attack surface

Along with IoT and connected device growth, rapid cloud adoption, accelerated digital transformation, and new hybrid working models have also significantly expanded the attack surface, the report noted.  ... Inefficient visibility and contextualization of application security risks leave organizations in “security limbo” because they don’t know what to focus on and prioritize, 58% of respondents said. “IT teams are being bombarded with security alerts from across the application stack, but they simply can’t cut through the data noise,” the report read. “It’s almost impossible to understand the risk level of security issues in order to prioritize remediation based on business impact. As a result, technologists are feeling overwhelmed by new security vulnerabilities and threats.” Lack of collaboration and understanding between IT operations teams and security teams is having several negative effects too, the report found, including increased vulnerability to security threats and blind spots, difficulties balancing speed, performance and security priorities, and slow reaction times when addressing security incidents.


Firmware Flaws Could Spell 'Lights Out' for Servers

Five vulnerabilities in the baseboard management controller (BMC) firmware used in servers of 15 major vendors could give attackers the ability to remotely compromise the systems widely used in data centers and for cloud services. The vulnerabilities, two of which were disclosed this week by hardware security firm Eclypsium, occur in system-on-chip (SoC) computing platforms that use AMI's MegaRAC Baseboard Management Controller (BMC) software for remote management. The flaws could impact servers produced by at least 15 vendors, including AMD, Asus, ARM, Dell, EMC, Hewlett-Packard Enterprise, Huawei, Lenovo, and Nvidia. Eclypsium disclosed three of the vulnerabilities in December, but withheld information on two additional flaws until this week in order to allow AMI more time to mitigate the issues. Since the vulnerabilities can only be exploited if the servers are connected directly to the Internet, the extent of the vulnerabilities is hard to measure, says Nate Warfield, director of threat research and intelligence at Eclypsium. 


As the anti-money laundering perimeter expands, who needs to be compliant, and how?

Remember: It’s not just existing criminals you’re looking for, but also people who could become part of a money laundering scheme. One very specific category is politically exposed persons (PEP), which refers to government workers or high-ranking officials at risk of bribery or corruption. Another category is people on sanctions lists, like the Specially Designated Nationals (SDN) list compiled by the Office of Foreign Assets Control (OFAC). These lists contain individuals and groups with links to high-risk countries. Extra vigilance is also necessary when dealing with money service businesses (MSB), as they’re more likely to become targets for money launderers. The point of all this is that a good AML program must include a thorough screening system that can detect high-risk customers before bringing them onboard. It’s great if you can stop criminals from accessing your system at all, but sometimes they slip through or influence existing customers. That’s why checking users’ backgrounds for red flags isn’t enough. You need to keep an eye on their current activity, too.
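
A toy sketch of the onboarding screening step described above; the watchlist entries and the fuzzy-matching threshold are placeholders, and real AML programs rely on vendor watchlist data and far more robust matching.

```python
# Toy screening check against watchlists (sanctions/SDN, PEP) during onboarding.
# Lists and threshold are placeholders for illustration only.
from difflib import SequenceMatcher

SDN_LIST = {"Ivan Petrov", "Acme Shell Holdings Ltd"}     # placeholder entries
PEP_LIST = {"Maria Example"}                              # placeholder entries

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen_customer(name: str, threshold: float = 0.85) -> list[str]:
    flags = []
    for entry in SDN_LIST:
        if similarity(name, entry) >= threshold:
            flags.append(f"possible SDN match: {entry}")
    for entry in PEP_LIST:
        if similarity(name, entry) >= threshold:
            flags.append(f"possible PEP match: {entry}")
    return flags   # empty list = no red flags at onboarding; keep monitoring activity

print(screen_customer("Ivan Petrov"))
```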


Digital transformation: 4 essential leadership skills

Decisiveness by itself is not enough. A strong technology leader needs to operate with flexibility. The pace of change is no longer linear, and leaders have less time to assess and understand every aspect of a decision. Consequently, decisions are made faster and are not always the best ones. Realizing which decisions are not spot-on and being able to adapt quickly is an example of the type of flexibility a leader needs. Another area leaders should understand is when, how, and from whom to take input when making adjustments. For example, leaders shouldn’t rely solely on customer input to make all product decisions. A flexible leader needs to understand the impact on the development teams and support teams as well. In our experience, teams with decisive and flexible leaders are more accepting of change. This is especially true during transformation. Leaders need to know when and how to be decisive to lead their team to success. In tandem, future-ready leaders can adapt to new information and inputs in today’s fast-paced technology environment.


Pathways to a More Sustainable Data Center

“When building a data center to suit today's needs and the needs 20 years in the future, the location of the facility is a key aspect,” he says. “Does it have space to expand with customer growth? Areas to remediate and replace systems and components? Is it in an area that has an extreme weather event seasonally? Are there ways to bring more power to the facility with this growth?” He says these are just a few of the questions that need to be thought of when deploying and maintaining a data center long term. "Technology may be able to stretch the limits of what’s possible, but sustainability starts with people,” Malloy adds. “Employees that implement and follow data center best practices keep a facility running in peak performance.” He says implementing simple things such as efficient lighting, following management-oriented processes and support-oriented processes for a proper maintenance and part replacement schedule increase the longevity of the facility equipment and increase customer satisfaction. 


Enterprise architecture modernizes for the digital era

Although leading enterprise architects see the need for a tool that better reflects the way they work, they also have concerns. “Provenance and credibility are key, so you risk making the wrong decisions as an enterprise architect if there’s no accuracy in the data,” Gregory says of how EAM tools are reliant on data quality. Winfield agrees, adding: “The difficult bit is getting accurate data into the EAM.” Gartner, in its Magic Quadrant for EA Tools, reports that the EAM sector could face some consolidation, too: “Due to the importance and growth in use of models in modern business, we expect to see some major vendors in adjacent market territories make strategic moves by either buying or launching their own EA tools.” Still, some CIOs question the value of adding EAM tools to their technology portfolio alongside IT service management (ITSM) tools, for example. The Very Group’s Subburaj foresees this being a challenge. “Some business leaders will struggle to see the direct business impact,” he says. 


Career path to CTO – we map out steps to take

Successful CTOs will need a range of skills, including technical but also business attributes. “The ability to advise and steer the technology strategy that is right for the business in the current and changing market conditions is crucial,” says Ryan Sheldrake, field CTO, EMEA, at cloud security firm Lacework. “Spending and investing wisely and in a timely manner is one of the more finessed parts of being a successful CTO.” ... “To achieve a promotion to this level, you need both,” she says. “For most of the CTO assignments we deliver, a solid knowledge base in software engineering, technical, product and enterprise architecture is required, as well as knowledge of cloud technologies and information security. From a leadership perspective, candidates need excellent influencing skills, strategic thinking, commercial management skills, and the gravitas to convey a vision and motivate a team.” There are ways in which individuals can help themselves stand out. “One of the critical things I did that really helped me develop into a CTO was to have an external mentor who was already a CTO,” says Mark Benson, CTO at Logicalis UKI. 


How Good Data Management Enables Effective Business Strategies

Data governance should also not be overlooked as an important component of data management and data quality. Sometimes used interchangeably, there are important differences. If data quality, as we’ve seen, is about making sure that all data owned by an organization is complete, accurate, and ready for business use, data governance, by contrast, is about creating the framework and rules by which an organization will use the data. The main purpose of data governance is to ensure the necessary data informs crucial business functions. It is a continuous process of assessing, often through a data steward, whether data that has been cleansed, matched, merged, and made ready for business use is truly fit for its intended purpose. Data governance rests on a steady supply of high-quality data, with frameworks for security, privacy, permissions, access, and other operational concerns. A data management strategy that encompasses the elements described above with respect to data quality will empower a business environment that can successfully achieve and even surpass business goals – from improving customer and employee experiences to increasing revenue and everything in between.


What Is Policy-as-Code? An Introduction to Open Policy Agent

As business, teams, and maturity progress, we'll want to shift from manual policy definition to something more manageable and repeatable at the enterprise scale. How do we do that? First, we can learn from successful experiments in managing systems at scale: Infrastructure-as-Code (IaC): treat the content that defines your environments and infrastructure as source code. DevOps: the combination of people, process, and automation to achieve "continuous everything," continuously delivering value to end users. Policy as code uses code to define and manage policies, which are rules and conditions. Policies are defined, updated, shared, and enforced using code and leveraging Source Code Management (SCM) tools. By keeping policy definitions in source code control, whenever a change is made, it can be tested, validated, and then executed. The goal of PaC is not to detect policy violations but to prevent them. This leverages the DevOps automation capabilities instead of relying on manual processes, allowing teams to move more quickly and reducing the potential for mistakes due to human error.
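
As one way to picture enforcement rather than detection, here is a sketch of a CI/CD step querying a running OPA server's data API before a deployment proceeds; the policy path and input fields are illustrative, with the corresponding Rego policy assumed to live in source control.

```python
# Sketch of enforcing a policy decision from a CI/CD step by querying a running
# OPA server's data API. The policy path (deploy/allow) and inputs are illustrative.
import requests

OPA_URL = "http://localhost:8181/v1/data/deploy/allow"   # local OPA, example policy path

def deployment_allowed(image: str, environment: str) -> bool:
    resp = requests.post(OPA_URL, json={"input": {"image": image, "environment": environment}})
    resp.raise_for_status()
    return resp.json().get("result", False)   # default-deny if the policy is missing

if not deployment_allowed("registry.example.com/app:1.4.2", "production"):
    raise SystemExit("policy violation: deployment blocked before it happens")
```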



Quote for the day:

"Those who are not true leaders will just affirm people at their own immature level." -- Richard Rohr

Daily Tech Digest - December 31, 2021

Can blockchain solve its oracle problem?

The so-called oracle problem may not be intractable, however — despite what Song suggests. “Yes, there is progress,” says Halaburda. “In supply-chain oracles, we have for example sensors with their individual digital signatures. We are learning about how many sensors there need to be, and how to distinguish manipulation from malfunction from multiple readings.” “We are also getting better in writing contracts taking into account these different cases, so that the manipulation is less beneficial,” Halaburda continues. “In DeFi, we also have multiple sources, and techniques to cross-validate. While we are making progress, though, we haven’t gotten to the end of the road yet.” As noted, oracles are critical to the emerging DeFi sector. “In order for DeFi applications to work and provide value to people and organizations around the world, they require information from the real world — like pricing data for derivatives,” Sam Kim, partner at Umbrella Network — a decentralized layer-two oracle solution — tells Magazine, adding:


Putting the trust back in software testing in 2022

Millions of organisations rely on manual processes to check the quality of their software applications, despite a fully manual approach presenting a litany of problems. Firstly, with more than 70% of outages caused by human error, testing software manually still leaves companies highly prone to issues. Secondly, it is exceptionally resource-intensive and requires specialist skills. Given the world is in the midst of an acute digital talent crisis, many businesses lack the personnel to dedicate to manual testing. Compounding this challenge is the intrinsic link between software development and business success. With companies coming under more pressure than ever to release faster and more regularly, the sheer volume of software needing testing has skyrocketed, placing a further burden on resources already stretched to breaking point. Companies should be testing their software applications 24/7 but the resource-heavy nature of manual testing makes this impossible. It is also demotivating to perform repeat tasks, which generally leads to critical errors in the first place. 


December 2021 Global Tech Policy Briefing

CISA and the National Security Administration (NSA), in the meantime, offered a second revision to their 5G cybersecurity guidance on December 2. According to CISA’s statement, “Devices and services connected through 5G networks transmit, use, and store an exponentially increasing amount of data. This third installment of the Security Guidance for 5G Cloud Infrastructures four-part series explains how to protect sensitive data from unauthorized access.” The new guidelines run on zero-trust principles and reflect the White House’s ongoing concern with national cybersecurity. ... On December 9, the European Commission proposed a new set of measures to ensure labor rights for people working on digital platforms. The proposal will focus on transparency, enforcement, traceability, and the algorithmic management of what it calls, in splendid Eurocratese, “digital labour platforms.” The number of EU citizens working for digital platforms has grown 500 percent since 2016, reaching 28 million, and will likely hit 43 million by 2025. Of the current 28 million, 59 percent work with clients or colleagues in another country. 


10 Predictions for Web3 and the Cryptoeconomy for 2022

Institutions will play a much bigger role in Defi participation — Institutions are increasingly interested in participating in Defi. For starters, institutions are attracted to higher than average interest-based returns compared to traditional financial products. Also, cost reduction in providing financial services using Defi opens up interesting opportunities for institutions. However, they are still hesitant to participate in Defi. Institutions want to confirm that they are only transacting with known counterparties that have completed a KYC process. Growth of regulated Defi and on-chain KYC attestation will help institutions gain confidence in Defi. ...  Defi insurance will emerge — As Defi proliferates, it also becomes the target of security hacks. According to London-based firm Elliptic, total value lost by Defi exploits in 2021 totaled over $10B. To protect users from hacks, viable insurance protocols guaranteeing users’ funds against security breaches will emerge in 2022. ... NFT Based Communities will give material competition to Web 2.0 social networks — NFTs will continue to expand in how they are perceived.


Firmware attack can drop persistent malware in hidden SSD area

Flex capacity is a feature in SSDs from Micron Technology that enables storage devices to automatically adjust the sizes of raw and user-allocated space to achieve better performance by absorbing write workload volumes. It is a dynamic system that creates and adjusts a buffer of space called over-provisioning, typically taking between 7% and 25% of the total disk capacity. The over-provisioning area is invisible to the operating system and any applications running on it, including security solutions and anti-virus tools. As the user launches different applications, the SSD manager adjusts this space automatically against the workloads, depending on how write or read-intensive they are. ... One attack modeled by researchers at Korea University in Seoul targets an invalid data area with non-erased information that sits between the usable SSD space and the over-provisioning (OP) area, and whose size depends on the two. The research paper explains that a hacker can change the size of the OP area by using the firmware manager, thus generating exploitable invalid data space.


'Businesses need to build threat intelligence for cybersecurity': Dipesh Kaura, Kaspersky

Organizations across industries are faced with the challenge of cybersecurity and the need to build threat intelligence holds equal importance for every business that thrives in a digital economy. While building threat intelligence is crucial, it is also necessary to have a solution that understands the threat vectors for every business, across every industry. A holistic threat intelligence solution looks at every nitty-gritty of an enterprise's security framework and gets the best actionable insights. A threat intelligence platform must capture and monitor real-time feeds from across an enterprise's digital footprint and turn them into insights to build a preventive posture, instead of a reactive one. It must diagnose and analyze security incidents on hosts and the network and signals from internal systems against unknown threats, thereby minimizing incident response time and disrupt the kill chain before critical systems and data are compromised. 


IT leadership: 3 ways to show gratitude to teams

If someone on your team takes initiative on a project, let them know that you appreciate them. Pull them aside, look them in the eye and speak truthfully about how much their extra effort means to you, the team, and the company. Make your thank-you’s genuine, direct, and personal. Most individuals value physical tokens of appreciation in addition to expressed gratitude. If you choose to offer a gift, make it as personalized as you can. For example, an Amazon gift card is nice – but a cake from their favorite bakery is even nicer. Personalization means that you’ve thought about them as a person, taken the time to consider what they like, and recognize their contributions as an individual. Contrary to the common belief that we should be lavish with our praises, I would argue that it’s better to be selective. Recognize behavior that lives up to your company’s values and reserve the recognition for situations where it is genuinely deserved. If a leader showers praise when it’s not really warranted, they devalue the praise that is given when team members actually go above and beyond.


Top 5 AI Trends That Will Shape 2022 and Beyond

Under the umbrella of technology, there are several terms with which you must be already familiar, such as artificial intelligence, machine learning, deep learning, blockchain technology, cognitive technology, data processing, data science, big data, and the list is endless. Just imagine, how would it be to survive in the pandemic outbreak if there would be no technology? What if there would be no laptops, PCs, tablets, smartphones, or any sort of gadgets during COVID-19? How would human beings earn for their survival and living? What if there would be no Netflix to binge-watch or no social media application during coronavirus? Undoubtedly, that’s extremely intimidating and intriguing at the same time. Isn’t it giving you goosebumps wondering how fast the technology is advancing? Let’s flick through some jaw-dropping statistics first. Did you know that there are more than 4.88 billion mobile phone users all across the world now? According to the technology growth statistics, almost 62% of the world’s population own a smartphone device.


Introducing the Trivergence: Transformation driven by blockchain, AI and the IoT

Blockchain is the distributed ledger technology underpinning the cryptocurrency revolution. We call it the internet of value because people can use blockchain for much more than recording crypto transactions. Distributed ledgers can store, manage and exchange anything of value — money, securities, intellectual property, deeds and contracts, music, votes and our personal data — in a secure, private and peer-to-peer manner. We achieve trust not necessarily through intermediaries like banks, stock exchanges or credit card companies but through cryptography, mass collaboration and some clever code. In short, blockchain software aggregates transaction records into batches or “blocks” of data, links and time stamps the blocks into chains that provide an immutable record of transactions with infinite levels of privacy or transparency, as desired. Each of these foundational technologies is uniquely and individually powerful. However, when viewed together, each is transformed. This is a classic case of the whole being greater than the sum of its parts.
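
A minimal sketch of that "link and time-stamp blocks into a chain" idea, showing why tampering with one block breaks every later link; it is an illustration of the hashing principle, not a real ledger.

```python
# Minimal illustration: each block commits to its predecessor via a hash, so
# altering history invalidates every later link.
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
block_2 = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])

# Tampering with the first block changes its hash and orphans block_2.
genesis["transactions"][0] = "alice pays bob 500"
recomputed = hashlib.sha256(json.dumps(
    {k: genesis[k] for k in ("timestamp", "transactions", "prev_hash")},
    sort_keys=True).encode()).hexdigest()
print(recomputed == block_2["prev_hash"])   # False: the chain no longer verifies
```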


Sustainability will be a key focus as the transport sector transitions in 2022

Delivery is also an area where we expect to see the movement towards e-fleets grow. We’ve already seen this being trialled, with parcel-delivery company DPD making the switch to a fully electric fleet in Oxford. It’s estimated that by replicating this in more cities, DPD could reduce CO2 by 42,000 tonnes by 2025. While third-party delivery companies offer retailers an efficient service, carrying as many as 320 parcels a day, this model is challenged when it comes to customers’ growing expectations they can receive deliveries within hours. Sparked by lockdowns, which led to a 48% increase in online shopping, the “rapid grocery delivery” trend looks set to grow in 2022. Grocery delivery company Getir, for example, built a fleet of almost 1,000 vehicles in 2021 to service this need – and is planning to spend £100m more to expand its offering. Given the current driver recruitment crisis, which is currently affecting delivery and taxi firms, we are not expecting many other operators to invest that kind of money into building new fleets though. Instead, you are more likely to see retailers working with existing fleets.



Quote for the day:

"Cream always rises to the top...so do good leaders." -- John Paul Warren

Daily Tech Digest - March 31, 2021

What is cyber risk quantification, and why is it important?

Put simply, the idea behind quantification is to prioritize risks according to their potential for financial loss, thus allowing responsible people in a company to create budgets based on mitigation strategies that afford the best protection and return on investment. Now to the difficult part: how to incorporate cyber risk quantification. "Risk quantification starts with the evaluation of your organization's cybersecurity risk landscape," explained Tattersall. "As risks are identified, they are annotated with a potential loss amount and frequency which feeds a statistical model that considers the probability of likelihood and the financial impact." Tattersall continued, "When assessing cybersecurity projects, risk quantification supports the use of loss avoidance as a proxy for return on investment. Investments in tighter controls, assessment practices and risk management tools are ranked by potential exposure." According to Tattersall, companies are already employing cyber risk quantification. He offered the FAIR Institute's Factor Analysis of Information Risk as an example. The FAIR Institute website mentions their platform provides a model for understanding, analyzing and quantifying cyber risk and operational risk in financial terms.
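
As a simplified illustration of quantification in this spirit, here is a Monte Carlo sketch that simulates loss-event frequency and magnitude and reports annualized loss; the distributions and parameters are assumptions, not FAIR-calibrated figures.

```python
# Simplified Monte Carlo sketch: simulate how often a loss event occurs per year
# and how costly each event is, then summarize annualized loss.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
N_SIMS = 20_000

# Loss-event frequency: on average 0.8 incidents per year (assumed).
frequency = rng.poisson(lam=0.8, size=N_SIMS)

# Loss magnitude per incident: lognormal around ~$250k with a heavy tail (assumed).
annual_loss = np.array([
    rng.lognormal(mean=np.log(250_000), sigma=1.0, size=n).sum() if n else 0.0
    for n in frequency
])

print(f"mean annualized loss: ${annual_loss.mean():,.0f}")
print(f"95th percentile loss: ${np.percentile(annual_loss, 95):,.0f}")
```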


What We Know (and Don't Know) So Far About the 'Supernova' SolarWinds Attack

It's not unusual for multiple nation-state attacker groups to target the same victim organization, nor even to reside concurrently and unbeknownst to one another while conducting their intelligence-gathering operations. But Supernova and the Orion supply chain attack demonstrate how nation-states also can have similar ideas yet different methods regarding how they target and ultimately burrow into the networks of their victims. Supernova homed in on SolarWinds' Orion by exploiting a flaw in the software running on a victim's server; Sunburst did so by inserting malicious code into builds for versions of the Orion network management platform. The digitally signed builds then were automatically sent to some 18,000 federal agencies and businesses last year via a routine software update process, but the attackers ultimately targeted far fewer victims than those who received the malicious software update, with fewer than 10 federal agencies affected as well as some 40 of Microsoft's own customers. US intelligence agencies have attributed that attack to a Russian nation-state group, and many details of the attack remain unknown.


World Backup Day 2021: what businesses need to know post-pandemic

For many businesses, the shift to remote working that occurred worldwide last year due to the Covid-19 outbreak brought with it an ‘always on’, omnichannel approach to customer service. As this looks set to continue meeting the needs of consumers, organisations must consider how they can protect their data continuously, with every change, update or new piece of data protected and available in real time. “Continuous data protection (CDP) is enabling this change, saving data in intervals of seconds – rather than days or months – and giving IT teams the granularity to quickly rewind operations to just seconds before disruption occurred,” said Levonai. “Completely flexible, CDP enables an IT team to quickly recover anything, from a single file or virtual machine right up to an entire site. “As more organisations join the CDP backup revolution, data loss may one day become as harmless as an April Fool’s joke. Until then, it remains a real and present danger.”... Businesses should back up their data by starting in reverse. Effective backup really starts with the recovery requirements and aligning to the business needs for continued service.


DevOps is Not Enough for Scaling and Evolving Tech-Driven Organizations

DevOps has been an evolution of breaking silos between Development and Operations to enable technical teams to be more effective in their work. However, in most organizations we still have other silos, namely: Business (Product) and IT (Tech). "BizDevOps" can be seen as an evolution from DevOps, where the two classical big silos in organizations are broken into having teams with the product and tech disciplines needed to build a product. This evolution is happening in many organizations, most of the times these are called "Product Teams". Is it enough to maximize impact as an organization? I don't think so, and that is the focus of my DevOps Lisbon Meetup talk and ideas around sociotechnical architecture and systems thinking I have been exploring. In a nutshell: we need empowered product teams, but teams must be properly aligned with value streams, which in turn must be aligned to maximize the value exchange with the customer. To accomplish this, we need to have a more holistic view and co-design of the organization structures and technical architecture.


This CEO believes it’s time to embrace ideological diversity and AI can help

It’s important to remember that each decision from a recruiter or hiring manager contributes to a vast dataset. AI utilizes these actions and learns the context of companies’ hiring practices. This nature makes it susceptible to bias when used improperly, so it is extremely critical to deploy AI models that are designed to minimize any adverse impact. Organizations can make sure humans are in the loop and providing feedback, steering AI to learn based on skill preferences and hiring requirements. With the ongoing curation of objective data, AI can help companies achieve recruiting efficiency while still driving talent diversity. One way hiring managers can distance themselves from political bias is by relying on AI to “score” candidates based on factors such as proficiency and experience, rather than data like where they live or where they attended college. In the future, AI might also be able to mask details such as name and gender to further reduce the risk of bias. With AI, team leaders receive an objective second opinion on hiring decisions by either confirming their favored candidate or compelling them to question whether their choice is the right one.


Why AI can’t solve unknown problems

Throughout the history of artificial intelligence, scientists have regularly invented new ways to leverage advances in computers to solve problems in ingenious ways. The earlier decades of AI focused on symbolic systems. This branch of AI assumes human thinking is based on the manipulation of symbols, and any system that can compute symbols is intelligent. Symbolic AI requires human developers to meticulously specify the rules, facts, and structures that define the behavior of a computer program. Symbolic systems can perform remarkable feats, such as memorizing information, computing complex mathematical formulas at ultra-fast speeds, and emulating expert decision-making. Popular programming languages and most applications we use every day have their roots in the work that has been done on symbolic AI. But symbolic AI can only solve problems for which we can provide well-formed, step-by-step solutions. The problem is that most tasks humans and animals perform can’t be represented in clear-cut rules.
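
A tiny hand-written rule system makes the point concrete: the developer specifies the facts and rules, and the program derives conclusions by symbol manipulation alone. The domain and rules below are made up.

```python
# Tiny rule-based (symbolic) system: hand-specified facts and rules, with
# forward chaining to derive new conclusions. The domain is a made-up example.
facts = {"has_fever", "has_cough"}

rules = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu"}, "recommend_rest"),
]

# Keep applying rules until nothing new can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # now includes 'suspect_flu' and 'recommend_rest'
```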


The ‘why’ of digital transformation is the key to unlocking value

Ill-prepared digital transformation projects have ripple effects. One digitalization effort that fails to produce value doesn’t just exist in a vacuum. If a technical upgrade, cloud migration, or ERP merge results in a system that looks the same as before, with processes that aren’t delivering anything new, then the decision makers will see that lack of ROI and lose interest in any further digitalization because they believe the value just isn’t there. Imagine an IT team leader saying they want fancy new dashboards and new digital boardroom features. But a digital transformation project that ends with just implementing new dashboards doesn’t change the underlying facts about what kind of data may be read on those dashboards. And if your fancy dashboards start displaying incorrect data or gaps in data sets, you haven’t just undermined the efficacy and “cool factor” of those dashboards; you’ve also made it that much harder to salvage the credibility of the project and advocate for any new digitalization in the future. What’s the value in new dashboards if you haven’t fixed the data problems underneath?


New Security Signals study shows firmware attacks on the rise

Microsoft has created a new class of devices, called Secured-core PCs, specifically designed to eliminate threats aimed at firmware. This was recently extended to Server and IoT, as announced at this year’s Microsoft Ignite conference. With Zero Trust built in from the ground up, security decision makers (SDMs) will be able to invest more of their resources in strategies and technologies that prevent future attacks, rather than constantly defending against the onslaught of attacks aimed at them today. The SDMs in the study who reported they have invested in Secured-core PCs showed a higher level of satisfaction with their security, and enhanced confidentiality, availability, and integrity of data, compared with those not using them. Based on analysis of Microsoft threat intelligence data, Secured-core PCs provide more than twice the protection from infection compared with non-Secured-core PCs. Sixty percent of surveyed organizations who invested in Secured-core PCs reported supply chain visibility and monitoring as a top concern.


7 Traits of Incredibly Efficient Data Scientists

Believe it or not, not every data analysis requires machine learning or artificial intelligence. The most efficient way to solve a problem is to use the simplest tool possible. Sometimes a simple Excel spreadsheet can yield the same result as a big, fancy deep learning algorithm. By choosing the right algorithms and tools from the start, a data science project becomes much more efficient. While it’s cool to impress everyone with a super complex tool, it doesn’t make sense in the long run when less time could be spent on a simpler, more efficient solution. ... Doing the job right the first time is the most efficient way to complete any project. When it comes to data science, that means writing code with a strict structure that makes it easy to go back and review, debug, change, and even make production-ready. Clear syntax guidelines make it possible for everyone to understand everyone else’s code. However, syntax guidelines aren’t just there so you can understand someone else’s chicken scratch — they’re also there so you can focus on writing the cleanest, most efficient code possible.


How insurers can act on the opportunity of digital ecosystems

First, insurers must embrace the shift to service-dominant strategies and gradually establish a culture of openness and collaboration, which will be necessary for the dynamic empowerment of all players involved. Second, insurers must bring to the platform the existing organizational capabilities required for customer-centric value propositions. This means establishing experts in the respective ecosystems—for example, in mobility, health, home, finance, or well-being—and building the technological foundations necessary to integrate partners, in terms of service catalogs and APIs, as well as to create seamless customer journeys. Finally, insurers must engage customers and other external actors by integrating resources and engaging in service exchange for mutual value generation. My wife, for example, has just signed up for a telematics policy with an insurance company that offers not only incentives based on driving behavior but also value-added services, including car sales and services. She now regularly checks whether her driving style reaches the maximum level possible.



Quote for the day:

"When we lead from the heart, we don't need to work on being authentic we just are!" -- Gordon Tredgold

Daily Tech Digest - November 28, 2019

Cutting Cybersecurity Budgets In A Time of Growing Threats

Greater spending on cybersecurity products hasn't translated into a better organizational security posture. Despite the millions of dollars spent by organizations year after year, the average cost of a cyberattack jumped by 50% between 2018 and 2019, hitting $4.6 million per incident. The percentage of cyberattacks that cost $10 million or more nearly doubled to 13% over the same period. Enterprises are using a diverse array of endpoint agents, including decryption, AV/AM, and EDR. The use of multiple security products may, in fact, weaken an organization’s security posture: the more agents an endpoint has, the greater the probability it will be breached. This wide deployment also makes it difficult to standardize a specific test to measure security and safety without sacrificing speed. Buying more cybersecurity tools tends to plunge enterprises into a costly cycle of spending more time and resources on security solutions without any parallel increase in security. Yet, in a chicken-and-egg pursuit, the trend of spending more on security products persists because of the rising costs of a security breach.



Digital transformation: Business modernization requires a new mindset

A lot of executives actually want to share their frustrations, and one of the frustrations, especially with more, let's just say, legacy-oriented organizations, I'll hear about millennials all the time. And then also the coming of centennials. In that they do want to work differently, they do think differently, and infrastructures, and also models, don't necessarily support that way of thinking and way of working. The consumerization of technology, it hasn't just affected millennials or the younger workforce, it's affected all of us. I think, anybody who has a smartphone or uses social media, or has ordered an Uber or Lyft, or DoorDash, or Postmates, you name it, we have, as human beings, radically transformed. Our brains have radically transformed as we use more of these technologies, we're multitasking, we're doing a million things. Employees get something like 200 notifications during their work day, just from their phone and social and email. So a lot of the way that we have to think about work has to change. We have to think bigger than the millennial workforce.


Hotel front desks are now a hotbed for hackers


First spotted in 2015 but appearing to be most active this year, RevengeHotels has struck at least 20 hotels in quick succession. The threat actors focus on hotels, hostels, and hospitality and tourism companies. While the majority of the RevengeHotels campaign takes place in Brazil, infections have also been detected in Argentina, Bolivia, Chile, Costa Rica, France, Italy, Mexico, Portugal, Spain, Thailand, and Turkey. The threat group deploys a range of custom Trojans to steal guest credit card data from infected hotel systems, as well as financial information sent from third-party booking websites such as Booking.com. The attack chain begins with a phishing email sent to a hospitality organization. The researchers say the messages are professionally written, detailed, use domain typo-squatting to appear legitimate, and generally impersonate real companies. These messages contain malicious Word, Excel, or PDF documents, some of which exploit CVE-2017-0199, a Microsoft Office remote code execution (RCE) vulnerability patched in 2017.


Regaining ROI by reducing cloud complexity

“The first thing is admitting that there’s an issue, which is a tough thing to do,” Linthicum acknowledges. “It essentially requires creating an ad hoc organization to get things back on track and simplified, whether that’s hiring outside specialists or doing it internally. The good thing about that is typically you can get 10 times ROI over a two-year period if you spend the time on reducing complexity,” he says. Even with that incentive, reducing complexity involves a cultural change: shifting to a proactive, innovative, and more thoughtful culture, which many organizations are having trouble moving toward, he warns. The most effective way to do that is retraining, replacing, or revamping. “That’s going to be a difficult thing for most organizations,” Linthicum says. “I’ve worked with existing companies that had issues like this, and I found it was the hardest problem to solve. But it’s something that has to be solved before we can get to the proactivity, before we can get to using technology as a force multiplier, before we can get to the points of innovation.”


Top 5 SD-WAN Takeaways for 2019

Auto failover, redundancy, simplified management, and cost savings topped the list of factors driving SD-WAN adoption, according to Avant Communications’ SD-WAN report. “It is Avant’s belief that SD-WAN will continue to make ongoing incursions into the higher-end enterprise, beginning at remote offices and other edges of the network, and then reaching steadily closer toward the core,” the report reads. One of the biggest promises made by many SD-WAN vendors is that the technology will reduce costs by shifting bandwidth off of — and in some cases eliminating the need for — expensive MPLS connections. And while this can be true, with more than half of companies surveyed in the aforementioned Avant report indicating that cost savings over MPLS was a key concern, respondents were still split on whether to keep their MPLS connections or replace them with SD-WAN and broadband internet. Roughly 40% of those surveyed said they planned to use a hybrid solution that combines the two.


Autonomous systems, aerial robotics and Game of Drones

Now, automation has basically enabled a level of productivity that you see today. But automation is very fragile, inflexible, expensive… it’s very cumbersome. Once you set them up and when everything is working well, it’s fantastic, and that is what we live with today. You know, autonomous systems, we think, can actually make that a lot easier. Now the broad industry is really still oriented toward automation. So we have to bring that industry over slowly into this autonomous world. And what’s interesting is, while these folks are experts in mechanical engineering and operations research and, you know, all those kind of important capabilities and logistics, they don’t know AI very much.  ... They don’t know how to create horizontal tool chains which enable efficient development and operations of these type of systems. So that’s the expertise we bring. I’d add one more point to it, is that the places we are seeing autonomous systems being built, like autonomous driving, they’re actually building it in a very, very vertical way.


How Machine Learning Enhances Performance Engineering and Testing


During testing, there are numerous signs that an application is producing a performance anomaly, such as delayed response times, increased latency, hanging, freezing, or crashing systems, and decreased throughput. The root cause of these issues can be traced to any number of sources, including operator errors, hardware or software failures, over- or under-provisioning of resources, or unexpected interactions between system components in different locations. There are three types of performance anomalies that performance testing experts look out for. ... Machine learning can be used to help build statistical models of "normal" behavior for a piece of software. These models are also invaluable for predicting future values and comparing them against the values collected in real time, which means they constantly redefine what "normal" behavior entails. A great advantage of machine learning algorithms is that they learn over time: when new data is received, the model can adapt automatically and help define what "normal" is month to month or week to week.
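As a rough sketch of that "learned baseline" idea, the snippet below maintains a rolling statistical model of normal response times and flags new measurements that drift too far from it. The window size, warm-up count, and 3-sigma threshold are illustrative assumptions, not values taken from the article.

# Illustrative sketch: maintain a rolling statistical model of "normal"
# response times and flag anomalies as new measurements arrive.
from collections import deque
from statistics import mean, stdev

class ResponseTimeMonitor:
    def __init__(self, window: int = 100, threshold: float = 3.0):
        self.samples = deque(maxlen=window)  # baseline adapts as old samples age out
        self.threshold = threshold           # how many std-devs counts as anomalous

    def observe(self, latency_ms: float) -> bool:
        """Return True if the new measurement looks anomalous against the baseline."""
        anomalous = False
        if len(self.samples) >= 30:  # wait for a minimally useful baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(latency_ms - mu) > self.threshold * sigma:
                anomalous = True
        self.samples.append(latency_ms)  # the definition of "normal" keeps updating
        return anomalous

monitor = ResponseTimeMonitor()
for latency in [120, 118, 125, 122, 119] * 10 + [480]:
    if monitor.observe(latency):
        print(f"Anomaly detected: {latency} ms")

Because the window is bounded, old samples age out and the model's notion of "normal" adapts week to week, mirroring the adaptive behavior described above.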


How Microsoft is using hardware to secure firmware

"Given the increase in firmware attacks we've seen in the last three years alone, the goal was to remove firmware as a trusted component of the boot process, so we're preventing these kinds of advanced firmware attacks," Dave Weston, director of OS security at Microsoft, told TechRepublic. The first line of the Windows boot loader on Secured-core PCs puts the CPU into a new security state where, instead of accepting the measurements made during Secure Boot, even though they're in the TPM, it goes back and revalidates the measurement. If they don't match, the PC doesn't boot and goes into BitLocker recovery mode instead. If you're managing the PC via Intune, it also sends a signal to the service that the device can't be trusted and shouldn't be allowed to connect to your network. "These PCs use the latest silicon from AMD, Intel, and Qualcomm that have the Trusted Platform Module 2.0 and Dynamic Root of Trust (DRTM) built in. The root of trust is a set of functions in the trusted computing module that is always trusted by a computer's OS and embedded in the device," Weston explains.



Not a single investment deal worth $100 million or more has been signed with an all-women team over the past four years, and only 7% of such deals went to mixed teams in 2019. That's still a slight improvement on the previous year, when every single mega-round went to teams led exclusively by men. Sarah Nöckel, investment associate at VC firm Dawn Capital, told ZDNet: "Europe is lagging behind on diversity. In general, there is still an ongoing unconscious bias towards women. There needs to be a lot more education to change mentalities." The issue is not that women are absent from the tech space: out of 1,200 European tech founders surveyed in the report, nearly a quarter identified as women. As it dug further, the report also found that women and men are almost equally qualified for science and engineering careers; in fact, in some countries, like Lithuania, the number of women who are scientists and engineers surpasses that of men. Women can and do found tech companies; the problem, rather, is that they then struggle to secure enough capital to develop their projects.


"Security campaigns do not work," says infosec professor Adam Joinson


The researchers' conclusions are based on a case study they performed with a large engineering services firm, based in the UK and employing more than 30,000 people. They found that - "whether we were talking to security practitioners or whether we were talking to employees" - security was not seen as something that supported the business; instead, it was perceived as a blocker. "In fact, they would see it as almost an adversary of employees," trying to catch and sanction workers for security breaches. One reason for this was a misalignment between security policies and processes, combined with a lack of tools for employees to do their jobs. As part of an engineering firm, employees often had to deal with "massive" files from architects and the like, but the company imposed a 15MB email attachment limit and did not allow workers to use USB sticks. Cloud storage, in one particular case, was banned by a client's security policies. "Effectively, security stopped them from doing the core function of their role."



Quote for the day:


"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld