
Daily Tech Digest - August 02, 2025


Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham


Chief AI role gains traction as firms seek to turn pilots into profits

CAIOs understand the strategic importance of their role, with 72% saying their organizations risk falling behind without AI impact measurement. Nevertheless, 68% said they initiate AI projects even if they can’t assess their impact, acknowledging that the most promising AI opportunities are often the most difficult to measure. Also, some of the most difficult AI-related tasks an organization must tackle rated low on CAIOs’ priority lists, including measuring the success of AI investments, obtaining funding and ensuring compliance with AI ethics and governance. The study’s authors didn’t suggest a reason for this disconnect. ... Though CEO sponsorship is critical, the authors also stressed the importance of close collaboration across the C-suite. Chief operating officers need to redesign workflows to integrate AI into operations while managing risk and ensuring quality. Tech leaders need to ensure that the technical stack is AI-ready, build modern data architectures and co-create governance frameworks. Chief human resource officers need to integrate AI into HR processes, foster AI literacy, redesign roles and foster an innovation culture. The study found that the factors that separate high-performing CAIOs from their peers are measurement, teamwork and authority. Successful projects address high-impact areas like revenue growth, profit, customer satisfaction and employee productivity.


Mind the overconfidence gap: CISOs and staff don’t see eye to eye on security posture

“Executives typically rely on high-level reports and dashboards, whereas frontline practitioners see the day-to-day challenges, such as limitations in coverage, legacy systems, and alert fatigue — issues that rarely make it into boardroom discussions,” she says. “This disconnect can lead to a false sense of security at the top, causing underinvestment in areas such as secure development, threat modeling, or technical skills.” ... Moreover, the CISO’s rise in prominence and repositioning for business leadership may also be adding to the disconnect, according to Adam Seamons, information security manager at GRC International Group. “Many CISOs have shifted from being technical leads to business leaders. The problem is that in doing so, they can become distanced from the operational detail,” Seamons says. “This creates a kind of ‘translation gap’ between what executives think is happening and what’s actually going on at the coalface.” ... Without a consistent, shared view of risk and posture, strategy becomes fragmented, leading to a slowdown in decision-making or over- or under-investment in specific areas, which in turn create blind spots that adversaries can exploit. “Bridging this gap starts with improving the way security data is communicated and contextualized,” Forescout’s Ferguson advises. 


7 tips for a more effective multicloud strategy

For enterprises using dozens of cloud services from multiple providers, the level of complexity can quickly get out of hand, leading to chaos, runaway costs, and other issues. Managing this complexity needs to be a key part of any multicloud strategy. “Managing multiple clouds is inherently complex, so unified management and governance are crucial,” says Randy Armknecht, a managing director and global cloud practice leader at business advisory firm Protiviti. “Standardizing processes and tools across providers prevents chaos and maintains consistency,” Armknecht says. Cloud-native application protection platforms (CNAPP) — comprehensive security solutions that protect cloud-native applications from development to runtime — “provide foundational control enforcement and observability across providers,” he says. ... Protecting data in multicloud environments involves managing disparate APIs, configurations, and compliance requirements across vendors, Gibbons says. “Unlike single-cloud environments, multicloud increases the attack surface and requires abstraction layers [to] harmonize controls and visibility across platforms,” he says. Security needs to be uniform across all cloud services in use, Armknecht adds. “Centralizing identity and access management and enforcing strong data protection policies are essential to close gaps that attackers or compliance auditors could exploit,” he says.


Building Reproducible ML Systems with Apache Iceberg and SparkSQL: Open Source Foundations

Data lakes were designed for a world where analytics required running batch reports and maybe some ETL jobs. The emphasis was on storage scalability, not transactional integrity. That worked fine when your biggest concern was generating quarterly reports. But ML is different. ... Poor data foundations create costs that don't show up in any budget line item. Your data scientists spend most of their time wrestling with data instead of improving models. I've seen studies suggesting sixty to eighty percent of their time goes to data wrangling. That's... not optimal. When something goes wrong in production – and it will – debugging becomes an archaeology expedition. Which data version was the model trained on? What changed between then and now? Was there a schema modification that nobody documented? These questions can take weeks to answer, assuming you can answer them at all. ... Iceberg's hidden partitioning is particularly nice because it maintains partition structures automatically without requiring explicit partition columns in your queries. Write simpler SQL, get the same performance benefits. But don't go crazy with partitioning. I've seen teams create thousands of tiny partitions thinking it will improve performance, only to discover that metadata overhead kills query planning. Keep partitions reasonably sized (think hundreds of megabytes to gigabytes) and monitor your partition statistics.
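To make the hidden-partitioning point concrete, here is a minimal PySpark sketch, assuming a Spark session already configured with the Iceberg runtime and a catalog named `demo`; the table, schema, and column names are illustrative, not from the article.

```python
from pyspark.sql import SparkSession

# Assumes Spark was launched with the Iceberg runtime JAR and a catalog named "demo"
# (e.g. spark.sql.catalog.demo=org.apache.iceberg.spark.SparkCatalog).
spark = SparkSession.builder.appName("iceberg-hidden-partitioning").getOrCreate()

# Hidden partitioning: partition by a transform of event_ts rather than a separate
# partition column, so readers never have to mention the partition in their queries.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.ml.training_events (
        event_id   BIGINT,
        user_id    BIGINT,
        label      DOUBLE,
        event_ts   TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# This filter is pruned to the matching daily partitions automatically.
df = spark.sql("""
    SELECT user_id, label
    FROM demo.ml.training_events
    WHERE event_ts >= TIMESTAMP '2025-07-01 00:00:00'
""")
df.show()
```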


The Creativity Paradox of Generative AI

Before talking about AI's creation ability, we need to understand a simple linguistic limitation: although the data used in these compositions initially carried human meaning, i.e., was seen as information, once it is de- and recomposed in a new, unknown way, the resulting compositions have no human interpretation, at least for a while, i.e., they do not form information. Moreover, these combinations cannot define new needs but rather offer previously unknown propositions to the specified tasks. ... Propagandists of know-it-all AI have a theoretical basis defined in the ethical principles that such an AI should realise and promote. Regardless of how progressive they sound, their core is about neo-Marxist concepts of plurality and solidarity. Plurality states that the majority of people – all versus you – is always right (while in human history it is usually wrong), i.e., if an AI tells you that your need is already resolved in the way that the AI articulated, you have to agree with it. Solidarity is, in essence, a prohibition of individual opinions and disagreements, even just slight ones, with the opinion of others; i.e., everyone must demonstrate solidarity with all. ... The know-it-all AI continuously challenges the necessity of people's creativity. The Big AI Brothers think for them, decide for them, and resolve all needs; the only thing that is required in return is to obey the Big AI Brother's directives.


Doing More With Your Existing Kafka

The transformation into a real-time business isn’t just a technical shift, it’s a strategic one. According to MIT’s Center for Information Systems Research (CISR), companies in the top quartile of real-time business maturity report 62% higher revenue growth and 97% higher profit margins than those in the bottom quartile. These organizations use real-time data not only to power systems but to inform decisions, personalize customer experiences and streamline operations. ... When event streams are discoverable, secure and easy to consume, they are more likely to become strategic assets. For example, a Kafka topic tracking payment events could be exposed as a self-service API for internal analytics teams, customer-facing dashboards or third-party partners. This unlocks faster time to value for new applications, enables better reuse of existing data infrastructure, boosts developer productivity and helps organizations meet compliance requirements more easily. ... Event gateways offer a practical and powerful way to close the gap between infrastructure and innovation. They make it possible for developers and business teams alike to build on top of real-time data, securely, efficiently and at scale. As more organizations move toward AI-driven and event-based architectures, turning Kafka into an accessible and governable part of your API strategy may be one of the highest-leverage steps you can take, not just for IT, but for the entire business.
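As a rough illustration of the payment-events idea, the sketch below exposes a Kafka topic as a simple read API using kafka-python and FastAPI. It is not an event gateway itself, and the topic name, broker address, and endpoint are assumptions; a real gateway would add authentication, schema validation, and rate limiting on top.

```python
import json
import threading
from collections import deque

from fastapi import FastAPI
from kafka import KafkaConsumer  # pip install kafka-python fastapi uvicorn

# Keep the most recent payment events in memory for the API to serve.
recent_events = deque(maxlen=1000)

def consume():
    consumer = KafkaConsumer(
        "payments.events",                   # illustrative topic name
        bootstrap_servers="localhost:9092",  # illustrative broker
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="latest",
    )
    for message in consumer:
        recent_events.append(message.value)

threading.Thread(target=consume, daemon=True).start()

app = FastAPI()

@app.get("/payments/recent")
def recent_payments(limit: int = 50):
    """Expose the payment event stream as a self-service read API for internal teams."""
    return list(recent_events)[-limit:]
```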


Meta-Learning: The Key to Models That Can "Learn to Learn"

Meta-learning is a field within machine learning that focuses on algorithms capable of learning how to learn. In traditional machine learning, an algorithm is trained on a specific dataset and becomes specialized for that task. In contrast, meta-learning models are designed to generalize across tasks, learning the underlying principles that allow them to quickly adapt to new, unseen tasks with minimal data. The idea is to make machine learning systems more like humans — able to leverage prior knowledge when facing new challenges. ... This is where meta-learning shines. By training models to adapt to new situations with few examples, we move closer to creating systems that can handle the diverse, dynamic environments found in the real world. ... Meta-learning represents the next frontier in machine learning, enabling models that are adaptable and capable of generalizing across a wide range of tasks with minimal data. By making machines more capable of learning from fewer examples, meta-learning has the potential to revolutionize fields like healthcare, robotics, finance, and more. While there are still challenges to overcome, the ongoing advancements in meta-learning techniques, such as few-shot learning, transfer learning, and neural architecture search, are making it an exciting area of research with vast potential for practical applications.
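A minimal PyTorch sketch of one meta-learning algorithm, Reptile, on synthetic sine-wave regression tasks may help make the "learning to learn" loop concrete; the architecture and hyperparameters are illustrative choices, not from the article.

```python
import copy
import math
import random

import torch
import torch.nn as nn

def sample_task():
    """Each task is a sine wave with its own amplitude and phase (few-shot regression)."""
    amp, phase = random.uniform(0.1, 5.0), random.uniform(0, math.pi)
    return lambda x: amp * torch.sin(x + phase)

meta_model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
meta_lr, inner_lr, inner_steps = 0.1, 0.01, 5

for _ in range(2000):
    task = sample_task()
    x = torch.rand(10, 1) * 10 - 5          # 10 support examples per task
    y = task(x)

    # Inner loop: adapt a copy of the meta-parameters to this single task.
    learner = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        loss = nn.functional.mse_loss(learner(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Outer (Reptile) update: nudge the meta-parameters toward the adapted weights,
    # so the shared initialization becomes easy to fine-tune on new tasks with few examples.
    with torch.no_grad():
        for p_meta, p_task in zip(meta_model.parameters(), learner.parameters()):
            p_meta += meta_lr * (p_task - p_meta)
```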


US govt, Big Tech unite to build one stop national health data platform

Under this framework, applications must support identity-proofing standards, consent management protocols, and Fast Healthcare Interoperability Resources (FHIR)-based APIs that allow for real-time retrieval of medical data across participating systems. The goal, according to CMS Administrator Chiquita Brooks-LaSure, is to create a “unified digital front door” to a patient’s health records that are accessible from any location, through any participating app, at any time. This unprecedented public-private initiative builds on rules first established under the 2016 21st Century Cures Act and expanded by the CMS Interoperability and Patient Access Final Rule. This rule mandates that CMS-regulated payers such as Medicare Advantage organizations, Medicaid programs, and Affordable Care Act (ACA)-qualified health plans make their claims, encounter data, lab results, provider remittances, and explanations of benefits accessible through patient-authorized APIs. ... ID.me, another key identity verification provider participating in the CMS initiative, has also positioned itself as foundational to the interoperability framework. The company touts its IAL2/AAL2-compliant digital identity wallet as a gateway to streamlined healthcare access. Through one-time verification, users can access a range of services across providers and government agencies without repeatedly proving their identity.
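For a sense of what a FHIR-based, patient-authorized retrieval looks like in practice, here is a minimal sketch using the standard FHIR REST search API. The base URL, token, and patient reference are hypothetical; a real app would obtain the token through the payer's OAuth/consent flow.

```python
import requests

# Hypothetical endpoint and token; a real app would obtain the token through the
# payer's SMART on FHIR / OAuth 2.0 authorization flow after patient consent.
FHIR_BASE = "https://api.example-payer.com/fhir"
ACCESS_TOKEN = "patient-authorized-oauth-token"

headers = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Accept": "application/fhir+json",
}

# Retrieve the patient's explanation-of-benefit records via a standard FHIR search.
resp = requests.get(
    f"{FHIR_BASE}/ExplanationOfBenefit",
    params={"patient": "Patient/12345"},  # illustrative patient reference
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
bundle = resp.json()
for entry in bundle.get("entry", []):
    print(entry["resource"]["resourceType"], entry["resource"].get("id"))
```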


What Is Data Literacy and Why Does It Matter?

Building data literacy in an organization is a long-term project, often spearheaded by the chief data officer (CDO) or another executive who has a vision for instilling a culture of data in their company. In a report from the MIT Sloan School of Management, experts noted that to establish data literacy in a company, it’s important to first establish a common language so everyone understands and agrees on the definition of commonly used terms. Second, management should build a culture of learning and offer a variety of modes of training to suit different learning styles, such as workshops and self-led courses. Finally, the report noted that it’s critical to reward curiosity – if employees feel they’ll get punished if their data analysis reveals a weakness in the company’s business strategy, they’ll be more likely to hide data or just ignore it. Donna Burbank, an industry thought leader and the managing director of Global Data Strategy, discussed different ways to build data literacy at DATAVERSITY’s Data Architecture Online conference in 2021. ... Focusing on data literacy will help organizations empower their employees, giving them the knowledge and skills necessary to feel confident that they can use data to drive business decisions. As MIT senior lecturer Miro Kazakoff said in 2021: “In a world of more data, the companies with more data-literate people are the ones that are going to win.”


LLMs' AI-Generated Code Remains Wildly Insecure

In the past two years, developers' use of LLMs for code generation has exploded, with two surveys finding that nearly three-quarters of developers have used AI code generation for open source projects, and 97% of developers in Brazil, Germany, and India are using LLMs as well. And when non-developers use LLMs to generate code without having expertise — so-called "vibe coding" — the danger of security vulnerabilities surviving into production code dramatically increases. Companies need to figure out how to secure their code because AI-assisted development will only become more popular, says Casey Ellis, founder at Bugcrowd, a provider of crowdsourced security services. ... Veracode created an analysis pipeline for the most popular LLMs (declining to specify in the report which ones they tested), evaluating each version to gain data on how their ability to create code has evolved over time. More than 80 coding tasks were given to each AI chatbot, and the subsequent code was analyzed. While the earliest LLMs tested — versions released in the first half of 2023 — produced code that did not compile, 95% of the updated versions released in the past year produced code that passed syntax checking. On the other hand, the security of the code has not improved much at all, with about half of the code generated by LLMs having a detectable OWASP Top-10 security vulnerability, according to Veracode.
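As a deliberately small illustration (not taken from the Veracode report) of the kind of OWASP Top-10 flaw that keeps appearing in generated code, here is SQL injection in Python with the built-in sqlite3 module; the vulnerable and fixed versions differ only in how the query is built.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(name: str):
    # Typical generated pattern: user input concatenated into the SQL string.
    # Input such as "' OR '1'='1" changes the query's meaning (CWE-89, OWASP A03).
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_vulnerable("' OR '1'='1"))  # returns every row
print(find_user_safe("' OR '1'='1"))        # returns nothing
```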

Daily Tech Digest - November 02, 2023

How Banks Can Turn Risk Into Reward Through Data Governance

To understand why data governance is critical for banks, we must understand the underlying challenges facing financial services organizations as they modernize. Rolling out new cloud applications or Internet of Things (IoT) devices into an environment where legacy on-premises systems are already in place means more data silos and data sets to manage. Often, this results in data volumes, variety, and velocity increasing much too quickly for banks. This gives rise to IT complexity—driven by technical debt or the reliance on systems cobbled together and one-off connections. Not only that, it also raises the specter of 'shadow IT' as employees look for workarounds to friction in executing tasks. This can create difficulties for banks trying to identify and manage their data assets in a consistent, enterprise-wide way that is aligned with business strategy. Ultimately, barely controlled data leads to errant financial reporting, data privacy breaches, and non-compliance with consumer data regulations. Failing to counter these risks can lead to fines, hurt brand image, and trigger lost sales. 


Key Considerations for Developing Organizational Generative AI Policies

It's crucial to ensure that all relevant stakeholders have a voice in the process, both to make the policy comprehensive and actionable and to ensure adherence to legal and ethical standards. The breadth and depth of stakeholders involved will depend on the organizational context, such as regulatory/legal requirements, the scope of AI usage and the potential risks associated (e.g., ethics, bias, misinformation). Stakeholders offer technical expertise, ensure ethical alignment, provide legal compliance checks, offer practical operational feedback, collaboratively assess risks, and jointly define and enforce guiding principles for AI use within the organization. Key stakeholders—ranging from executive leadership, legal teams and technical experts to communication teams, risk management/compliance and business group representatives—play crucial roles in shaping, refining and implementing the policy. Their contributions ensure legal compliance, technical feasibility and alignment with business and societal values.


CIOs sharpen cloud cost strategies — just as gen AI spikes loom

One key skill CIOs are honing to lower costs is their ability to negotiate with cloud providers, said one CIO who declined to be named. “People better understand the charges, and [they] better negotiate costs. After being in cloud and leveraging it better, we are able to manage compute and storage better ourselves,” said the CIO, who notes that vendors are not cutting costs on licenses or capacity but are offering more guidance and tools. “After some time, people have understood the storage needs better based on usage and preventing data extract fees.” Thomas Phelps, CIO and SVP of corporate strategy at Laserfiche, says cloud contracts typically include several “gotchas” that IT leaders and procurement chiefs should be aware of, and he stresses the importance of studying terms of use before signing. ... CIOs may also fall into the trap of misunderstanding product mixes and the downside of auto-renewals, he adds. “I often ask vendors to walk me through their product quote and explain what each product SKU or line item is, such as the cost for an application with the microservices and containerization,” Phelps says. 


Misdirection for a Price: Malicious Link-Shortening Services

Security researchers gave the service the codename "Prolific Puma." They discovered it by identifying patterns in links being used by some scammers and phishers that appeared to trace to a common source. The service appears to have been active since at least 2020 and is regularly used to route victims to malicious domains, sometimes first via other link-shortening service URLs. "Prolific Puma is not the only illicit link shortening service that we have discovered, but it is the largest and the most dynamic," said Renee Burton, senior director of threat intelligence for Infoblox, in a new report on the cybercrime service. "We have not found any legitimate content served through their shortener." Infoblox, a Santa Clara, California-based IT automation and security company, published a list of 60 URLs it has tied to Prolific Puma's attacks. The URLs employ such domains as hygmi.com, yyds.is, 0cq.us, 4cu.us and regz.information. Infoblox said many domains registered by the group are parked for several weeks before being used, since many reputation-based security defenses will treat freshly registered domains as more likely to be malicious.


DNS security poses problems for enterprise IT

EMA asked research participants to identify the DNS security challenges that cause them the most pain. The top response (28% of all respondents) is DNS hijacking. Also known as DNS redirection, this process involves intercepting DNS queries from client devices so that connection attempts go to the wrong IP address. Hackers often achieve this by infecting clients with malware so that queries go to a rogue DNS server, or they hack a legitimate DNS server and hijack queries at a much larger scale. The latter method can have a large blast radius, making it critical for enterprises to protect DNS infrastructure from hackers. The second most concerning DNS security issue is DNS tunneling and exfiltration (20%). Hackers typically exploit this issue once they have already penetrated a network. DNS tunneling is used to evade detection while extracting data from a compromised network. Hackers hide extracted data in outgoing DNS queries. Thus, it’s important for security monitoring tools to closely watch DNS traffic for anomalies, like abnormally large packet sizes. The third most pressing security concern is a DNS amplification attack (20%).
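A minimal sketch of the kind of DNS monitoring this points to: flagging query names that are unusually long or information-dense, two common signs of tunneling. The thresholds, domain-splitting heuristic, and sample queries are illustrative assumptions.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of information per character; encoded payloads tend to score high."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_tunneling(qname: str, max_len: int = 60, max_entropy: float = 4.0) -> bool:
    """Flag DNS query names that are unusually long or high-entropy."""
    subdomain = qname.rsplit(".", 2)[0]  # crude split: everything left of the registered domain
    return len(qname) > max_len or shannon_entropy(subdomain) > max_entropy

queries = [
    "www.example.com",
    # Encoded data smuggled out as a subdomain label, typical of DNS exfiltration:
    "aGVsbG8tdGhpcy1pcy1zdG9sZW4tZGF0YS0wMDAxMjM0NTY3ODk.badsite.io",
]
for q in queries:
    print(q, "->", "suspicious" if looks_like_tunneling(q) else "ok")
```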


Data governance that works

Once we've found our targeted business initiatives and the data is ready to meet the needs of those initiatives, there are three major governance pillars we want to address for that data: understand, curate, and protect. First, we want to understand the data. That means having a catalog of data that we can analyze and explain. We need to be able to profile the data, to look for anomalies, to understand the lineage of that data, and so on. We also want to curate the data, or make it ready for our particular initiatives. We want to be able to manage the quality of the data, integrate it from a variety of sources across domains, and so on. And we want to protect the data, making sure we comply with regulations and manage the life cycle of the data as it ages. More importantly, we need to enable the right people to get to the right data when they need it. AWS has tools, including Amazon DataZone and AWS Glue, to help companies do all of this. It's really tempting to attack these issues one by one and to support each individually. But in each pillar, there are so many possible actions that we can take. This is why it's better to work backwards from business initiatives.


EU digital ID reforms should be ‘actively resisted’, say experts

The group’s concerns over the amendments largely centre on Article 45 of the reformed eIDAS, where it says the text “radically expands the ability of governments to surveil both their own citizens and residents across the EU by providing them with the technical means to intercept encrypted web traffic, as well as undermining the existing oversight mechanisms relied on by European citizens”. “This clause came as a surprise because it wasn’t about governing identities and legally binding contracts, it was about web browsers, and that was what triggered our concern,” explained Murdoch. ... All websites today are authenticated by root certificates controlled by certificate authorities, which assure the user that the cryptographic keys used to authenticate the website content belong to the website. The certificate owner can intercept a user’s web traffic by replacing these cryptographic keys with ones they control, even if the website has chosen to use a different certificate authority with a different certificate. There are multiple cases of this mechanism having been abused in reality, and legislation to govern certificate authorities does exist and, by and large, has worked well.


The key to success is to think beyond the obvious, to innovate and look for solutions

AI systems, including machine learning models, make critical decisions and recommendations. Ensuring the accuracy and reliability of these AI models is paramount. AI heavily relies on data and ensuring data quality, integrity, and consistency is a crucial task. Data pre-processing and validation are necessary steps to make AI models work effectively. Integration of software testing in the software development life cycle helps identify and rectify issues that could lead to incorrect predictions or decisions, minimizing the risks associated with AI tools. AI models are susceptible to adversarial attacks and robust security testing helps identify vulnerabilities and weaknesses in AI systems, protecting them from cyber threats and ensuring the safety of automated processes. Testing is not a one-time effort; it’s an ongoing process. Regular testing and monitoring are necessary to identify issues that may arise as AI models and automated systems evolve. High-quality, well-tested AI-driven automation can provide a competitive advantage.


We built a ‘brain’ from tiny silver wires.

We are working on a completely new approach to “machine intelligence”. Instead of using artificial neural network software, we have developed a physical neural network in hardware that operates much more efficiently. ... Using nanotechnology, we made networks of silver nanowires about one thousandth the width of a human hair. These nanowires naturally form a random network, much like the pile of sticks in a game of pick-up sticks. The nanowires’ network structure looks a lot like the network of neurons in our brains. Our research is part of a field called neuromorphic computing, which aims to emulate the brain-like functionality of neurons and synapses in hardware. Our nanowire networks display brain-like behaviours in response to electrical signals. External electrical signals cause changes in how electricity is transmitted at the points where nanowires intersect, which is similar to how biological synapses work. There can be tens of thousands of synapse-like intersections in a typical nanowire network, which means the network can efficiently process and transmit information carried by electrical signals.


Why public/private cooperation is the best bet to protect people on the internet

Neither the FTC nor the SEC was empowered by Congress with responsibility for cyberspace, and both have relied on pre-existing authorities related to corporate representations to bring actions against individuals who did not have corporate duties managing legal or external communications. They are using the tools at their disposal to change expectations, even if it means bringing a bazooka to a knife fight. These cases make CISOs worried that in addition to being technical experts they also need to personally become experts on data breach disclosure laws and experts on SEC reporting requirements rather than trusting their peers in the legal and communications departments of their organizations. What we need is a real partnership between the public and the private sector, clear rules and expectations for IT professionals and law enforcement, and an executive branch that will attempt regulation through rulemaking rather than through ugly and costly enforcement actions that target IT professionals for doing their jobs and further deepen the adversarial public-private divide.



Quote for the day:

"Leadership is working with goals and vision; management is working with objectives." -- Russel Honore

Daily Tech Digest - August 03, 2022

Why the future of APIs must include zero trust

Devops leaders are pressured to deliver digital transformation projects on time and under budget while developing and fine-tuning APIs at the same time. Unfortunately, API management and security are an afterthought when the devops teams rush to finish projects on deadline. As a result, API sprawl happens fast, multiplying when all devops teams in an enterprise don’t have the API Management tools and security they need. More devops teams require a solid, scalable methodology to limit API sprawl and provide the least privileged access to them. In addition, devops teams need to move API management to a zero-trust framework to help reduce the skyrocketing number of breaches happening today. The recent webinar sponsored by Cequence Security and Forrester, Six Stages Required for API Protection, hosted by Ameya Talwalkar, founder and CEO and guest speaker Sandy Carielli, Principal Analyst at Forrester, provide valuable insights into how devops teams can protect APIs. In addition, their discussion highlights how devops teams can improve API management and security.


India withdraws personal data protection bill that alarmed tech giants

The move comes as a surprise as lawmakers had indicated recently that the bill, unveiled in 2019, could see the “light of the day” soon. New Delhi received dozens of amendments and recommendations from a Joint Committee of Parliament that “identified many issues that were relevant but beyond the scope of a modern digital privacy law,” said India’s Junior IT Minister Rajeev Chandrasekhar. The government will now work on a “comprehensive legal framework” and present a new bill, he added. ... “The Personal Data Protection Bill, 2019 was deliberated in great detail by the Joint Committee of Parliament. 81 amendments were proposed and 12 recommendations were made towards comprehensive legal framework on digital ecosystem. Considering the report of the JCP, a comprehensive legal framework is being worked upon. Hence, in the circumstances, it is proposed to withdraw ‘The Personal Data Protection Bill, 2019’ and present a new bill that fits into the comprehensive legal framework,” India’s IT Minister Ashwini Vaishnaw said in a written statement Wednesday.


Don't overengineer your cloud architecture

A recent Deloitte study uncovered some interesting facts about cloud computing budgets. You would think budgets would make a core difference in how businesses leverage cloud computing effectively, but they are not good indicators to predict success. Although this could indicate many things, I suspect that money is not correlated to value with cloud computing. In many instances, this may be due to the design and deployment of overly complex cloud solutions when simpler and more cost-effective approaches would work better to get to the optimized value that most businesses seek. If you ask the engineers why they designed the solution this way (whether overengineered or not), they will defend their approach around some reason or purpose that nobody understands but them. ... This is a systemic problem now, which has arisen because we have very few qualified cloud architects out there. Enterprises are settling for someone who may have passed a vendor’s architecture certification, which only makes them proficient in a very narrow grouping of technology and often doesn’t consider the big picture.


Leveraging data privacy by design

Privacy laws and regulations, therefore, can include guidelines for facilitating industry standards, benchmarks for privacy enhancing technologies and funding privacy by design research to incentivise technology designers to enhance privacy safeguard measures in their product designs, thereby promoting technological models that are privacy savvy. The above can be better understood from the following example. For instance, the price paid for a helmet by a motorbike rider is a compliance cost, as it is an additional purchase requirement for safety over and above his immediate need for using a bike as a tool for commuting. However, a seat belt that is subsumed as a component of a car, and not an additional requirement, is perceived differently by the owner. Thus, compliance requirements that are perceived as additional obligations result in the perception of increased compliance costs, whereas compliance requirements embedded in the design of the product itself are considered part of the total product price and not separate costs. Privacy by design can thus prompt a shift in the business model whereby, through the incorporation of privacy features within the technological design of the product itself, privacy compliance comes to be perceived as part of the product rather than as an additional cost.


Is it bad to give employees too many tech options?

The most important question in developing (or expanding) an employee-choice model is determining how much choice to allow. Offer too little and you risk undermining the effort's benefits. Offer too much and you risk a level of tech anarchy that can be as problematic as unfettered shadow IT. There isn’t a one-size-fits-all approach. Every organization has unique culture, requirements/expectations, and management capabilities. An approach that works in a marketing firm would differ from a healthcare provider, and a government agency would need a different approach than a startup. Options also vary depending on the devices employees use — desktop computing and mobile often require differing approaches, particularly for companies that employ a BYOD program for smartphones. ... Google is making a play for the enterprise by offering ChromeOS Flex, which turns aging PCs and Macs into Chromebooks. This allows companies to continue to use machines that have dated or limited hardware, but it also means adding support for ChromeOS devices. 


Patterns and Frameworks - What's wrong?

Many people say that we should prefer libraries to frameworks and I must say that might be true. If a library could do the job you need (for example, the communication between a client and a server I presented at the beginning of the article) and meets the performance, security, protocols and any other requirements your service needs to support, then the fact we can have a "Framework" automate some class generations for us might be of minor importance, especially if such a Framework will not be able to deal with the application classes and would force us to keep creating new patterns just to convert object types. ... Yet, they fall short when dealing with app specific types and force us to either change our types just to be able to work with the framework or, when two or more frameworks are involved, there's no way out and we need to create alternative classes and copy data back and forth, doing the necessary conversions, which completely defeats the purpose of having the transparent proxies.


Where are all the technologists? Talent shortages and what to do about them

Instead of looking for that complete match, shift to 80% instead – the other 20% can almost always be met through training, support and development once in the job. Another flexibility is around age. The most sought-after candidates are in the 35-49 age bracket. But don’t rule out the under-35s or the over-50s. There are brilliant people in both groups – one with all the potential for the future, the other with invaluable experience and work knowhow. This brings us to another absolutely key approach: to invest in training and upskilling. I have one client who is looking ahead and can see that they will have a significant software development skills requirement in about four years’ time. So they are training their existing software engineers now, so they can move into these roles when the time comes. There is a growing emphasis among digital leaders on increasing the amount of internal cross-training into tech. This is something that can be applied externally, too. Look outside the business for talent that can be supported into a tech career – people who may be in other fields right now but have the right aptitude, mindset and ambition.


We’re Spending Billions Each Year on Cybersecurity. So Why Aren’t Data Breaches Going Away?

As companies invest heavily in technology, communication, and training to reduce cybersecurity risk and as they begin seeing the positive impact of those efforts, they may let their guard down—not paying as much attention to the risks, not communicating as often, or failing to ensure that new employees (or employees in new positions) are receiving the information and training they need. Cybercrooks only need to be successful once to achieve their goals, but companies need to be successful 100% of the time to avoid being compromised. Consider this: security is subject to the same natural laws that govern the rest of the universe. Entropy is real… we move from order to chaos. ... A strong security culture is a must-have to combat the continuous threats that all companies are subject to. Employees’ security awareness, behaviors and the organization’s culture must be assessed regularly. Policies and training programs should be consistently updated to address the changing threat landscape. Failure to do so puts companies at risk of data theft, business interruption, or falling victim to ransomware scams.


What is supervised machine learning?

A common process involves hiring a large number of humans to label a large dataset. Organizing this group is often more work than running the algorithms. Some companies specialize in the process and maintain networks of freelancers or employees who can code datasets. Many of the large models for image classification and recognition rely upon these labels. Some companies have found indirect mechanisms for capturing the labels. Some websites, for instance, want to know if their users are humans or automated bots. One way to test this is to put up a collection of images and ask the user to search for particular items, like a pedestrian or a stop sign. The algorithms may show the same image to several users and then look for consistency. When a user agrees with previous users, that user is presumed to be a human. The same data is then saved and used to train ML algorithms to search for pedestrians or stop signs, a common job for autonomous vehicles. Some algorithms use subject-matter experts and ask them to review outlying data. Instead of classifying all images, it works with the most extreme values and extrapolates rules from them.
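A small sketch of the consistency check described here, assuming each image has labels from several annotators; the image IDs, label names, and agreement threshold are illustrative.

```python
from collections import Counter

# Labels collected from several human annotators per image (illustrative data).
annotations = {
    "img_001": ["stop_sign", "stop_sign", "stop_sign", "pedestrian"],
    "img_002": ["pedestrian", "pedestrian", "cyclist"],
    "img_003": ["cyclist", "stop_sign", "pedestrian"],
}

def aggregate(labels, min_agreement=0.66):
    """Keep a label only when enough annotators agree; otherwise send it back for review."""
    top_label, votes = Counter(labels).most_common(1)[0]
    if votes / len(labels) >= min_agreement:
        return top_label
    return None  # too inconsistent to trust as training data

for image_id, labels in annotations.items():
    consensus = aggregate(labels)
    print(image_id, "->", consensus or "needs expert review")
```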


Machine learning creates a new attack surface requiring specialized defenses

While all adversarial machine learning attack types need to be defended against, different organizations will have different priorities. Financial institutions leveraging machine learning models to identify fraudulent transactions are going to be highly focused on defending against inference attacks. If an attacker understands the strengths and weaknesses of a fraud detection system, they can use that to alter their techniques to go undetected, bypassing the model altogether. Healthcare organizations could be more sensitive to data poisoning. The medical field has been some of the earliest adopters of using their massive historical data sets to predict outcomes with machine learning. Data poisoning attacks can lead to misdiagnosis, alter results of drug trials, misrepresent patient populations and more. Security organizations themselves are presently focusing on machine learning bypass attacks that are actively being used to deploy ransomware or backdoor networks. ... The best advice I can give to a CISO today is to embrace patterns we’ve already learned on emerging technologies.
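For readers unfamiliar with what a model-bypass (evasion) attack looks like mechanically, here is a minimal PyTorch sketch of one classic technique, the fast gradient sign method; the model and input are placeholders, and with a real trained detector a small perturbation budget is often enough to flip the decision.

```python
import torch
import torch.nn as nn

# Placeholder classifier and input; in practice this would be the deployed model
# (e.g. a fraud or malware detector) and a real feature vector or image.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.eval()

x = torch.randn(1, 20, requires_grad=True)  # original input
true_label = torch.tensor([0])

# Fast gradient sign method (FGSM): take one bounded step in the direction that
# increases the loss, producing an input the model is likely to misclassify.
loss = nn.functional.cross_entropy(model(x), true_label)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```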



Quote for the day:

"There are three secrets to managing. The first secret is have patience. The second is be patient. And the third most important secret is patience." -- Chuck Tanner

Daily Tech Digest - November 21, 2021

Ransomware Phishing Emails Sneak Through SEGs

The original email purported to need support for a “DWG following Supplies List,” which is supposedly hyperlinked to a Google Drive URL. The URL is actually an infection link, which downloaded an .MHT file. “.MHT file extensions are commonly used by web browsers as a webpage archive,” Cofense researchers explained. “After opening the file the target is presented with a blurred out and apparently stamped form, but the threat actor is using the .MHT file to reach out to the malware payload.” That payload comes in the form of a downloaded .RAR file, which in turn contains an .EXE file. “The executable is a DotNETLoader that uses VBS scripts to drop and run the MIRCOP ransomware in memory,” according to the analysis. ... “Its opening lure is business-themed, making use of a service – such as Google Drive – that enterprises employ for delivering files,” the researchers explained. “The rapid deployment from the MHT payload to final encryption shows that this group is not concerned with being sneaky. Since the delivery of this ransomware is so simple, it is especially worrying that this email found its way into the inbox of an environment using a SEG.”


How Decentralized Finance Will Impact Business Financial Services

In essence, DeFi aims to provide a worldwide, decentralized alternative to every financial service now available, such as insurance, savings, and loans. DeFi’s primary goal is to offer financial services to the world’s 1.7 billion unbanked individuals. And this is possible because DeFi is borderless. These financial services are available to anybody with a smartphone and internet connection in any part of the world. For the impoverished and unbanked, this will revolutionize banking. They can invest anywhere in the world in anything with just the touch of a button. By providing open access for all, DeFi empowers individuals and businesses to maintain greater control over their assets and gives them the financial freedom to select how to invest their money without relying on any intermediary. DeFi is also censorship-resistant, making it immune from government intervention. Furthermore, sending money across borders is extremely costly under the existing system. DeFi eliminates the need for costly intermediaries, allowing for better interest rates and lower expenses, while also democratizing banking systems.


Addressing the Low-Code Security Elephant in the Room

What are some development choices about the application layer that affect the security responsibility? If the low-code application is strictly made up of low-code platform native capabilities or services, you only have to worry about the basics. That includes application design and business logic flaws, securing your data in transit and at rest, security misconfigurations, authentication, authorizing and adhering to the principle of least-privilege, providing security training for your citizen developers, and maintaining a secure deployment environment. These are the same elements any developer — low-code or traditional — would need to think about in order to secure the application. Everything else is handled by the low-code platform itself. That is as basic as it gets. But what if you are making use of additional widgets, components, or connectors provided by the low-code platform? Those components — and the code used to build them — are definitely out of your jurisdiction of responsibility. You may need to consider how they are configured or used in your application, though.


Google Introduces ClusterFuzzLite Security Tool for CI/CD

ClusterFuzzLite enables you to run continuous fuzzing on your Continuous integration and delivery (CI/CD) pipeline. The result? You’ll find vulnerabilities more easily and faster than ever before. This is vital. A 2020 GitLab DevSecOps survey found that, while 81% of developers believed fuzz testing is important, only 36% were actually using fuzzing. Why? Because it was too much trouble to set fuzzing up and integrate it with their CI/CD systems. At the same time, though, as Shuah Khan, kernel maintainer and the Linux Foundation’s third Linux Fellow, has pointed out “It is easier to detect and fix problems during the development process,” than it is to wait for manual testing or quality assurance later in the game. By feeding unexpected or random data into a program, fuzzing catches bugs that would otherwise slip past the most careful eyeballs. NIST’s guidelines for software verification specify fuzzing as a minimum standard requirement for code verification. After all as Dan Lorenc, founder and CEO of Chainguard and former Google open source security team software engineer, recently told The New Stack, 
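ClusterFuzzLite reuses OSS-Fuzz-style harnesses; for Python projects that typically means an Atheris entry point along the lines of the sketch below, where `parse_record` is a hypothetical stand-in for whatever function you want to fuzz.

```python
import sys

import atheris

with atheris.instrument_imports():
    import json  # stand-in for the library under test

def parse_record(data: bytes):
    """Hypothetical function under test: decode bytes and parse them as JSON."""
    return json.loads(data.decode("utf-8"))

def TestOneInput(data: bytes):
    # Feed random/unexpected bytes to the target; crashes and unexpected
    # exceptions are reported as findings by the fuzzer.
    try:
        parse_record(data)
    except (ValueError, UnicodeDecodeError):
        pass  # malformed input is expected; anything else is a bug

if __name__ == "__main__":
    atheris.Setup(sys.argv, TestOneInput)
    atheris.Fuzz()
```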


Bitcoin Is How We Really Build A New Financial System

When it comes to a foundational sound money, Bitcoin is unmatched. Compared to other blockchain assets, Bitcoin has had an immaculate conception. Also, Bitcoin has an elegantly simple monetary policy and an immutable supply freed from human discretion – something no other cryptocurrency asset can provide. Bitcoin's monetary policy is based on algorithmically-determined parameters and is thus perfectly predictable, rule-based and neither event- nor emotion-driven. By depoliticizing monetary policy and entrusting money creation to the market according to rule-based parameters, Bitcoin’s monetary asset behaves as neutrally as possible. Bitcoin is truly sound money since it provides the highest degree of stability, reliability and security. Most crypto enthusiasts would probably object that while Bitcoin might be the soundest money, its technical capabilities do not allow for DeFi to be built on top of it. As a matter of fact though, nothing could be further from the truth.
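The rule-based issuance referred to here can be written down in a few lines: the block subsidy starts at 50 BTC and halves every 210,000 blocks, which caps total supply just under 21 million BTC.

```python
# Bitcoin's issuance schedule: 50 BTC per block, halved every 210,000 blocks,
# computed in satoshis (1 BTC = 100,000,000 satoshis) with integer division,
# mirroring how the protocol rounds the subsidy down at each halving.
HALVING_INTERVAL = 210_000
subsidy = 50 * 100_000_000  # initial block subsidy in satoshis

total = 0
while subsidy > 0:
    total += subsidy * HALVING_INTERVAL
    subsidy //= 2

print(f"Maximum supply: {total / 100_000_000:,.8f} BTC")  # about 20,999,999.9769 BTC
```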


A Simple 5-Step Framework to Minimize the Risk of a Data Breach

The first step businesses need to take to increase the security of their customer data is to review what types of data they're collecting and why. Most companies that undertake this exercise end up surprised by what they find. That's because, over time, the volume and variety of customer information that gets collected to expand well beyond a business's original intent. For example, it's fairly standard to collect things like a customer's name and email address. And if that's all a business has on file, they won't be an attractive target to an attacker. But if the business has a cloud call center or any type of high touch sales cycle or customer support it probably collects home addresses, financial data, and demographic information, they've then assembled a collection that's perfect for enabling identity theft if the data got out into the wild. So, when evaluating each collected data point to determine its value, businesses should ask themselves: what critical business function does this data facilitate. If the answer is none, they should purge the data and stop collecting it. 


To Monitor or Not to Monitor a Model — Is there a question?

Evidently AI works by analyzing the training and production datasets. It maps the data from features in the training data to their counterparts in the production data. ... Thereafter it runs different statistical tests depending on the input. Evidently AI then creates graphs that are based on the plotly python library, and you can read more about the code in their open-source GitHub repository. For binary categorical features, it performs a simple Z-test for a difference in proportions to verify if there is a statistically significant difference in how often the training and production data have one of the two values for the binary variable. For multivariate categorical features, it performs a chi-squared test, which aims to see if the distribution of the variable in the production data is likely based on the distribution in the training data. Finally, for numeric features, it performs a two-sample Kolmogorov-Smirnov test for goodness of fit that assesses the distributions of the feature in the training and production data to see if they are likely to be the same distribution, or if they vary from each other significantly.
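A sketch of the three checks described above, written directly against scipy on synthetic "reference" (training) and "current" (production) samples; Evidently's actual implementation wraps tests like these with thresholds and report generation, and the sample data here is made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# --- Binary categorical feature: two-proportion z-test ---------------------
train_flag = rng.binomial(1, 0.30, size=2000)  # reference (training) data
prod_flag = rng.binomial(1, 0.38, size=1500)   # current (production) data
p1, p2 = train_flag.mean(), prod_flag.mean()
n1, n2 = len(train_flag), len(prod_flag)
p_pool = (train_flag.sum() + prod_flag.sum()) / (n1 + n2)
z = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
p_value_z = 2 * stats.norm.sf(abs(z))
print("binary feature, z-test p-value:", round(p_value_z, 4))

# --- Multivalued categorical feature: chi-squared test ---------------------
train_counts = np.array([700, 200, 100])  # category counts in training data
prod_counts = np.array([520, 260, 120])   # category counts in production data
expected = train_counts / train_counts.sum() * prod_counts.sum()
chi2, p_value_chi2 = stats.chisquare(prod_counts, f_exp=expected)
print("categorical feature, chi-squared p-value:", round(p_value_chi2, 4))

# --- Numeric feature: two-sample Kolmogorov-Smirnov test -------------------
train_num = rng.normal(0.0, 1.0, size=2000)
prod_num = rng.normal(0.3, 1.0, size=1500)
ks_stat, p_value_ks = stats.ks_2samp(train_num, prod_num)
print("numeric feature, KS-test p-value:", round(p_value_ks, 4))
```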


IBM’s latest quantum chip breaks the elusive 100-qubit barrier

The Eagle is a quantum processor that is around the size of a quarter. Unlike regular computer chips, which encode information as 0 or 1 bits, quantum computers can represent information in something called qubits, which can have a value of 0, 1, or both at the same time due to a unique property called superposition. By holding over 100 qubits in a single chip, IBM says that Eagle could increase the “memory space required to execute algorithms,” which would in theory help quantum computers take on more complex problems. “People have been excited about the prospects of quantum computers for many decades because we have understood that there are algorithms or procedures you can run on these machines that you can’t run on conventional or classical computers,” says David Gosset, an associate professor at the University of Waterloo’s Institute for Quantum Computing who works on research with IBM, “which can accelerate the solution of certain, specific problems.”
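In standard quantum-computing notation (not from the article), superposition and the scaling that makes a 100-plus-qubit chip interesting look like this:

```latex
% A single qubit is a superposition of the basis states, with unit-norm amplitudes:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1

% An n-qubit register is described by 2^n complex amplitudes, so crossing
% 100 qubits means a state space of dimension
2^{100} \approx 1.27 \times 10^{30}
```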


Industrial computer vision is getting ready for growth

Industrial applications, however, present some unique challenges for computer vision systems. Many organizations can’t use pretrained machine learning models that have been tuned to publicly available data. They need models that are trained on their specific data. Sometimes, those organizations don’t have enough data to train their ML models from scratch, so they need to go through some more complicated processes, such as pretraining the model on a general dataset and then finetuning it on their own labeled images. The challenges of industrial computer vision are not limited to data. Sometimes, sensitivities such as safety or transparency impose special requirements on the type of algorithm and accuracy metrics used in industrial computer vision systems. And the team running the model needs an entire MLOps stack to monitor model performance, iterate across models, maintain different versions of the models, and manage a pipeline for gathering new data and retraining the models.
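A minimal PyTorch/torchvision sketch of the pretrain-then-finetune pattern described here: start from ImageNet weights, freeze the backbone, and train a new head on the organization's own labeled images. The dataset path, class count, and training settings are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Start from a model pretrained on a general dataset (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

num_classes = 4  # illustrative: e.g. defect categories on a production line
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Organization-specific labeled images, one folder per class (illustrative path).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("data/factory_images/train", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```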


Three Big Myths About Decentralized Finance

Because the blockchain uses so many distinct sources to verify and record what happens within the system, there is also a common misconception that decentralized finance is inherently safer than centralized systems run by a single financial institution. After all, if thousands of sources check my transactions, won't they be able to identify and prevent anyone trying to use my account without my permission? Not necessarily. While it's true that the blockchain does help to safeguard against administrative or accounting errors — as happened recently with one family who mistakenly received $50 billion in their account — it also removes the safeguards that centralized financial businesses provide. Most of today's largest financial institutions have been around for decades. Over the years, federal and industry regulation have been put in place to provide safeguards against fraud. Navigating these safeguards can no doubt be tiresome, but they do provide valuable protections.




Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman

Daily Tech Digest - July 28, 2021

DevOps Is Dead, Long Live AppOps

The NoOps trend aims to remove all the friction between development and operations by simply removing operations, as the name suggests. This may seem a drastic solution, but we do not have to take it literally. The right interpretation — the feasible one — is to remove the human component from the deployment and delivery phases as much as possible. That approach is naturally supported by the cloud, which helps things work by themselves. ... One of the most evident scenarios that explains the benefit of AppOps is any application based on Kubernetes. If you open each cluster you will find a lot of pod/service/deployment settings that are mostly the same. In fact, every PHP application has the same configuration, except for parameters. Same for Java, .Net, or other applications. The problem is that Kubernetes is agnostic to the content of the applications it hosts, so we need to inform it about every detail. We have to start from the beginning for every new application, even if the technology is the same. Why? We should only have to explain once how a PHP application is composed.


Thrill-K: A Blueprint for The Next Generation of Machine Intelligence

Living organisms and computer systems alike must have instantaneous knowledge to allow for rapid response to external events. This knowledge represents a direct input-to-output function that reacts to events or sequences within a well-mastered domain. In addition, humans and advanced intelligent machines accrue and utilize broader knowledge with some additional processing. I refer to this second level as standby knowledge. Actions or outcomes based on this standby knowledge require processing and internal resolution, which makes it slower than instantaneous knowledge. However, it will be applicable to a wider range of situations. Humans and intelligent machines need to interact with vast amounts of world knowledge so that they can retrieve the information required to solve new tasks or increase standby knowledge. Whatever the scope of knowledge is within the human brain or the boundaries of an AI system, there is substantially more information outside or recently relevant that warrants retrieval. We refer to this third level as retrieved external knowledge.


GitHub’s Journey From Monolith to Microservices

Good architecture starts with modularity. The first step towards breaking up a monolith is to think about the separation of code and data based on feature functionalities. This can be done within the monolith before physically separating them in a microservices environment. It is generally a good architectural practice to make the code base more manageable. Start with the data and pay close attention to how they’re being accessed. Make sure each service owns and controls access to its own data, and that data access only happens through clearly defined API contracts. I’ve seen a lot of cases where people start by pulling out the code logic but still rely on calls into a shared database inside the monolith. This often leads to a distributed monolith scenario where it ends up being the worst of both worlds - having to manage the complexities of microservices without any of the benefits. Benefits such as being able to quickly and independently deploy a subset of features into production. Getting data separation right is a cornerstone in migrating from a monolithic architecture to microservices. 


Data Strategy vs. Data Architecture

By being abstracted from the problem solving and planning process, enterprise architects became unresponsive, he said, and “buried in the catacombs” of IT. Data Architecture needs to look at finding and putting the right mechanisms in place to support business outcomes, which could be everything from data systems and data warehouses to visualization tools. Data architects who see themselves as empowered to facilitate the practical implementation of the Business Strategy by offering whatever tools are needed will make decisions that create data value. “So now you see the data architect holding the keys to a lot of what’s happening in our organizations, because all roads lead through data.” Algmin thinks of data as energy, because stored data by itself can’t accomplish anything, and like energy, it comes with significant risks. “Data only has value when you put it to use, and if you put it to use inappropriately, you can create a huge mess,” such as a privacy breach. Like energy, it’s important to focus on how data is being used and have the right controls in place. 


Why CISA’s China Cyberattack Playbook Is Worthy of Your Attention

In the new advisory, CISA warns that the attacks will also compromise email and social media accounts to conduct social engineering attacks. A person is much more likely to click on an email and download software if it comes from a trusted source. If the attacker has access to an employee's mailbox and can read previous messages, they can tailor their phishing email to be particularly appealing – and even make it look like a response to a previous message. Unlike “private sector” criminals, state-sponsored actors are more willing to use convoluted paths to get to their final targets, said Patricia Muoio, former chief of the NSA’s Trusted System Research Group, who is now general partner at SineWave Ventures. ... Private cybercriminals look for financial gain. They steal credit card information and health care data to sell on the black market, hijack machines to mine cryptocurrencies, and deploy ransomware. State-sponsored attackers are after different things. If they plan to use your company as an attack vector to go after another target, they'll want to compromise user accounts to get at their communications. 


Breaking through data-architecture gridlock to scale AI

Organizations commonly view data-architecture transformations as “waterfall” projects. They map out every distinct phase—from building a data lake and data pipelines up to implementing data-consumption tools—and then tackle each only after completing the previous ones. In fact, in our latest global survey on data transformation, we found that nearly three-quarters of global banks are knee-deep in such an approach. However, organizations can realize results faster by taking a use-case approach. Here, leaders build and deploy a minimum viable product that delivers the specific data components required for each desired use case (Exhibit 2). They then make adjustments as needed based on user feedback. ... Legitimate business concerns over the impact any changes might have on traditional workloads can slow modernization efforts to a crawl. Companies often spend significant time comparing the risks, trade-offs, and business outputs of new and legacy technologies to prove out the new technology. However, we find that legacy solutions cannot match the business performance, cost savings, or reduced risks of modern technology, such as data lakes. 


Data-Intensive Applications Need Modern Data Infrastructure

Modern applications are data-intensive because they make use of a breadth of data in more intricate ways than anything we have seen before. They combine data about you, your environment, and your usage, and use that to predict what you need to know. They can even take action on your behalf. This is made possible by the data made available to the app and by data infrastructure that can process that data fast enough to make use of it. Analytics that used to be done in separate applications (like Excel or Tableau) are getting embedded into the application itself. This means less work for the user to discover the key insight, or no work at all, as the insight is identified by the application and simply presented to the user. This makes it easier for the user to act on the data as they go about accomplishing their tasks. To deliver this kind of application, you might think you need an array of specialized data storage systems, each specializing in a different kind of data. But data infrastructure sprawl brings with it a host of problems.
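As a rough illustration of analytics moving into the application itself, here is a minimal Python sketch assuming a hypothetical usage-metrics history: the app computes the insight and presents it to the user directly, instead of leaving the analysis to an external tool.

```python
# Minimal sketch: the application derives an insight from usage data and surfaces
# it to the user inline, rather than exporting the data to a separate analytics
# tool. The data, threshold, and wording are hypothetical.
from statistics import mean
from typing import Optional

def usage_insight(daily_usage: list[float], today: float) -> Optional[str]:
    """Return a ready-to-display insight, or None if nothing notable happened."""
    baseline = mean(daily_usage)
    if baseline and today > 1.2 * baseline:
        pct = (today / baseline - 1) * 100
        return f"Heads up: today's usage is {pct:.0f}% above your recent average."
    return None

if __name__ == "__main__":
    history = [10.0, 12.0, 11.0, 9.5, 10.5]    # stand-in for a window of past metrics
    print(usage_insight(history, today=14.0))  # Heads up: today's usage is 32% above ...
```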


The Future of Microservices? More Abstractions

A couple of other initiatives regarding Kubernetes are worth tracking. Jointly created by Microsoft and Alibaba Cloud, the Open Application Model (OAM) is a specification for describing applications that separates the application definition from the operational details of the cluster. It thereby enables application developers to focus on the key elements of their application rather than the operational details of where it deploys. Crossplane is the Kubernetes-specific implementation of OAM. It can be used by organizations to build and operate an internal platform-as-a-service (PaaS) across a variety of infrastructures and cloud vendors, making it particularly useful in multicloud environments, such as those increasingly found in large enterprises as a result of mergers and acquisitions. Whilst OAM seeks to separate the responsibility for deployment details from writing service code, service meshes aim to shift the responsibility for interservice communication away from individual developers to a dedicated infrastructure layer that manages the communication between services using a proxy.


Navigating data sovereignty through complexity

Data sovereignty is the concept that data is subject to the laws of the country in which it is processed. In a world of rapid adoption of SaaS, cloud and hosted services, the issues that data sovereignty can raise become obvious. In simpler times, data wasn’t something businesses needed to be concerned about; it could be shared and transferred freely with little consequence. Businesses with a digital presence operated on a small scale and with low data demands, hosted on on-premise infrastructure. This meant that data could be monitored and kept secure, quite different from the more distributed and hybrid systems that many businesses use today. With so much data sharing and so little regulation, it all came crashing down with the Cambridge Analytica scandal in 2018, prompting strict laws on privacy. ... When dealing with on-premise infrastructure, governance is clearer, as it must follow the rules of the country it’s in. However, when data is in the cloud, a business can store it in any number of locations, regardless of where the business itself is.


How security leaders can build emotionally intelligent cybersecurity teams

EQ is important: it has been found by Goleman and Cary Cherniss to positively influence team performance and to cultivate positive social exchanges and social support among team members. However, rather than focusing on cultivating EQ, cybersecurity leaders such as CISOs and CIOs are often preoccupied with day-to-day operations (e.g., dealing with the latest breaches, the latest threats, board meetings, team meetings and so on). In doing so, they risk overlooking the importance of developing and strengthening their own emotional intelligence and that of the individuals within their teams. As well as EQ considerations, cybersecurity leaders must also be conscious of the team’s makeup in terms of gender, age and cultural attributes and values. This is very relevant to cybersecurity teams, as they are often hugely diverse. Such values and attributes will likely introduce a diverse set of beliefs shaped by how and where an individual grew up and the values of their parents.



Quote for the day:

"The mediocre leader tells The good leader explains The superior leader demonstrates The great leader inspires." -- Buchholz and Roth

Daily Tech Digest - June 05, 2021

The rise of cybersecurity debt

Complexity is the enemy of security. Some companies are forced to put together as many as 50 different security solutions from up to 10 different vendors to protect their sprawling technology estates — acting as a systems integrator of sorts. Every node in these fantastically complicated networks is like a door or window that might be inadvertently left open. Each represents a potential point of failure and an exponential increase in cybersecurity debt. We have an unprecedented opportunity and responsibility to update the architectural foundations of our digital infrastructure and pay off our cybersecurity debt. To accomplish this, two critical steps must be taken. First, we must embrace open standards across all critical digital infrastructure, especially the infrastructure used by private contractors to service the government. Until recently, it was thought that the only way to standardize security protocols across a complex digital estate was to rebuild it from the ground up in the cloud. But this is akin to replacing the foundations of a home while still living in it. You simply cannot lift-and-shift massive, mission-critical workloads from private data centers to the cloud.


Zero trust: The good, the bad and the ugly

Right from the start, the name zero trust has unwelcome implications. On the surface, it appears that management does not trust employees or that everything done on the network is suspect until proven innocent. "While this line of thinking can be productive when discussing the security architecture of devices and other digital equipment, security teams need to be careful that it doesn't spill over to informing their policy around an employer's most valuable asset, its people," said Jason Meller, CEO and founder at Kolide. "Users who feel their privacy is in jeopardy, or who do not have the energy to continually justify why they need access to resources, will ultimately switch to using their own personal devices and services, creating a new and more dangerous problem—shadow IT," continued Meller. "Frustratingly, the ill effects of not trusting users often force them to become untrustworthy, which in turn encourages IT and security practitioners to advocate for more aggressive zero trust-based policies." In the interview, Meller suggested the first thing organizations looking to implement zero trust should do is form a working group with representatives from human resources, privacy experts and end users themselves.


From Boardroom To Service Floor: How To Make Cybersecurity An Organizational Priority Now

Of course, companies don’t just want to identify risk. They want to prevent relevant threats and secure their IT infrastructure. To achieve this, boardrooms, C-suite executives and cybersecurity teams will need to focus on the most potent risks — from insider threats to misconfigured databases — to enhance their defensive posture to meet the moment. This should begin by addressing your in-house vulnerabilities. With so many data breaches caused, in part, by employees, companies can defend data by enhancing their educational and oversight protocols. For instance, employee monitoring that harnesses user behavior analytics can empower companies to identify employees who might be vulnerable to a phishing scam, allowing leaders to direct teaching and training to mitigate the risk. (Full disclosure: Employee monitoring is among my company’s key provisions.) Similarly, cybersecurity software that restricts data access, movement and manipulation can ensure that data is available on a need-to-know basis, reducing opportunities for negligence or accidents to undermine data security.
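As a rough sketch of how user behavior analytics can direct teaching and training, the following Python fragment scores users against a hypothetical event log; the event names, weights, and threshold are illustrative only, not any vendor's actual model.

```python
# Minimal sketch of user behavior analytics for targeted training, assuming a
# hypothetical event log. Event types, weights, and the threshold are illustrative.
from collections import defaultdict

RISK_WEIGHTS = {
    "clicked_external_link": 1,
    "failed_phishing_simulation": 3,
    "disabled_mfa_prompt": 2,
}
TRAINING_THRESHOLD = 4

def users_needing_training(events: list[tuple[str, str]]) -> list[str]:
    """events is a list of (user, event_type) pairs; return users above the threshold."""
    scores: dict[str, int] = defaultdict(int)
    for user, event_type in events:
        scores[user] += RISK_WEIGHTS.get(event_type, 0)
    return sorted(user for user, score in scores.items() if score >= TRAINING_THRESHOLD)

if __name__ == "__main__":
    log = [
        ("alice", "clicked_external_link"),
        ("alice", "failed_phishing_simulation"),
        ("bob", "clicked_external_link"),
    ]
    print(users_needing_training(log))  # ['alice']
```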


How Testers Can Contribute to Product Definition

The approach to closing the understanding gap that has proven successful is "listening before talking". In practice, this means meeting the stakeholders, learning about their motivation and goals, building relationships and establishing a collaboration – basically, a feedback loop. Next was to explore the clients’ needs and their user personas by either talking to product manager(s), reading industry-related articles, or analyzing customer data because each user persona has a different goal and therefore a different task to complete in our product. For me, it’s essential to understand these differences to learn what is important to each one of them and aim for the specific quality characteristics when providing feedback on design, user experience, or product requirements. ... Practically, the shorter the feedback loop, the better. To make it shorter, I try to be there when the project starts to kick off and requirements are shaped, or when first prototypes are done, and generally be proactive by asking what’s the next important thing, inviting different stakeholders for pairing and collaborating closely to discover and share important information about the product.

API Security Depends on the Novel Use of Advanced ML & AI

By creating API-driven applications, we have exposed a much bigger attack surface. That’s number one. Number two, of course, we have made it more challenging for attackers, but the attack surface, being so much bigger now, needs to be dealt with in a completely different way. The older class of applications relied on rules-based systems as the common approach to solving security use cases. Because there was just a single application, and the application would not change that much in terms of the interfaces it exposed, you could build rules to analyze how traffic goes in and out of that application. Now, when we break the application into multiple pieces and bring in other paradigms of software development, such as DevOps and Agile development methodologies, we create a scenario where applications are always rapidly changing. There is no way rules can catch up with these rapidly changing applications. We need automation to understand what is happening with these applications, and we need automation to solve these problems, which rules alone cannot do.
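To illustrate why a learned baseline can keep up where static rules cannot, here is a minimal Python sketch: per-endpoint request rates are modeled from observed traffic and outliers are flagged. The endpoints, numbers, and z-score threshold are hypothetical, and a production system would use far richer features and models.

```python
# Minimal sketch of baseline learning for API traffic: model requests-per-minute
# per endpoint from observed history, then flag deviations. Data are illustrative.
from statistics import mean, pstdev

def build_baseline(samples: dict[str, list[float]]) -> dict[str, tuple[float, float]]:
    """Learn (mean, stddev) of requests-per-minute for each endpoint from history."""
    return {ep: (mean(vals), pstdev(vals)) for ep, vals in samples.items()}

def is_anomalous(baseline, endpoint: str, observed_rpm: float, z: float = 3.0) -> bool:
    """Flag traffic more than z standard deviations above what was learned."""
    if endpoint not in baseline:          # unseen endpoint: nothing learned yet
        return True
    mu, sigma = baseline[endpoint]
    return observed_rpm > mu + z * max(sigma, 1.0)

if __name__ == "__main__":
    history = {"/orders": [40, 42, 38, 41], "/login": [5, 6, 4, 5]}
    baseline = build_baseline(history)
    print(is_anomalous(baseline, "/login", 60))    # True: far above the learned rate
    print(is_anomalous(baseline, "/orders", 43))   # False: within normal variation
```

The key property is that the baseline is recomputed from traffic as the application changes, whereas a hand-written rule would have to be rewritten for every new or modified interface.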


Everything You Need To Know About India’s Centre for Artificial Intelligence and Robotics

CAIR is involved in research and development in AI, robotics, command and control, networking, information and communication security, along with the development of mission-critical products for battlefield communication and management systems. CAIR was appraised for Capability Maturity Model Integration (CMMI) Maturity Level 2 in 2014 and has ISO 9001:2015 certification. As part of the Defence Research and Development Organisation (DRDO), robotics was one of the priority areas of CAIR, said V S Mahalingam, former director, CAIR. Mahalingam joined DRDO in 1986 and served in Electronics & Radar Development Establishment (LRDE) till 2000 before he moved to CAIR. “Concentrating on the development of totally indigenous robots, the lab developed a variety of controllers and manipulators for Gantry, Scara, and other types of robots. With the experience gained from these initial years, the lab developed an autonomous guided vehicle (AGV). The expertise in control systems required for robotics was applied to the development of control laws for Tejas fighter,” Mahalingam added.


How do I become a network architect?

For the most part, network architects fall into department management roles overseeing teams of network engineers, system administrators, and perhaps application developers. The goal of a network architect is to design efficient, reliable, cost-effective network infrastructures that meet the long-term information technology and business goals of an organization. The trick is to accomplish those long-term goals while also permitting the organization to meet its short-term business goals and financial obligations. ... Successful network architects must be able to see the big picture regarding current and future information technology infrastructure, not only for the organization but for the industry and general business environment as well. Individuals fulfilling the job role must be able to produce a documented vision of network infrastructure now and in the future. Documentation is important because a network architect must be able to present their vision of current and future network needs and goals to C-level management, employees, and other stakeholders. They must be able to communicate why their vision is correct, and why those stakeholders should provide the resources necessary to bring that vision to fruition.


The Beauty of Edge Computing

The volume and velocity of data generated at the edge are primary factors that will impact how developers allocate resources at the edge and in the cloud. “A major impact I see is how enterprises will manage their cloud storage because it’s impractical to save the large amounts of data that the Edge creates directly to the cloud,” says Will Kelly, technical marketing manager for a container security startup. “Edge computing is going to shake up cloud financial models, so let’s hope enterprises have access to a cloud economist or solution architect who can tackle that challenge for them.” With billions of industrial and consumer IoT devices being deployed, managing the data is an essential consideration in any edge-to-cloud strategy. “Advanced consumer applications such as streaming multiplayer games, digital assistants and autonomous vehicle networks demand low latency data so it is important to consider the tremendous efficiencies achieved by keeping data physically close to where it is consumed,” says Scott Schober, President/CEO of Berkeley Varitronics Systems, Inc. It’s not much of a stretch to view edge computing as an integral component of the fast-evolving hybrid cloud.
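A minimal Python sketch of the edge-to-cloud pattern described above: raw readings are reduced at the edge and only a compact summary is shipped upstream. The sensor window and the upload function are hypothetical placeholders.

```python
# Minimal sketch of keeping data close to where it is produced: raw readings are
# reduced at the edge and only a small summary is sent to the cloud. The reading
# source and upload_to_cloud are hypothetical placeholders.
from statistics import mean

def summarize_at_edge(readings: list[float]) -> dict:
    """Reduce a window of raw sensor readings to a few fields worth uploading."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "min": min(readings),
    }

def upload_to_cloud(summary: dict) -> None:
    # Placeholder for an HTTPS call to the cloud backend.
    print(f"uploading {summary}")

if __name__ == "__main__":
    window = [21.3, 21.4, 22.0, 25.7, 21.2]  # e.g. one minute of temperature samples
    upload_to_cloud(summarize_at_edge(window))  # 5 raw points become one small record
```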


Is STG Building a New Cybersecurity Powerhouse?

The consensus is that STG will likely either form a completely new company out of its newly acquired businesses, hoping the sum of the parts will make STG a major player in the security space, or simply allow customers to pull together a security plan on an a la carte basis from STG's various parts. "You can see a future where we're going to have a clash of some really sophisticated industry heavyweights. You're going to have to compete with Microsoft; you're going to have to compete with Cisco. So if you're going to get in a fight with Microsoft and Cisco, you better bring a big stick. And it looks like they've now got a big stick," says Frank Dickson, program vice president at IDC. Peter Firstbrook, vice president and analyst with Gartner, believes STG is putting together a portfolio to deliver a one-stop shopping experience for those looking for a suite of cybersecurity products and solutions to protect their organization. "One trend they could take advantage of is the propensity of buyers to seek out fewer, more strategic vendors that have integrated solutions," Firstbrook says. "Eighty percent of buyers want to consolidate the number of security products and vendors to make their security operations more efficient."


Using Distributed Tracing in Microservices Architecture

Observability is monitoring the behavior of infrastructure at a granular level. This facilitates maximum visibility within the infrastructure and supports the incident management team in maintaining the reliability of the architecture. Observability is achieved by recording system data in various forms, such as metrics, alerts (events), logs, and traces. These functions help in deriving insights into the internal health of the infrastructure. Here, we are going to discuss the importance of tracing and how it evolved into a technique called distributed tracing. Tracing is continuous supervision of an application’s flow and data progression, often representing the track of a single user’s journey through an app stack. Traces make the behavior and state of an entire system more obvious and comprehensible. Distributed request tracing is an evolution of observability that helps keep cloud applications in good health. Distributed tracing is the process of following a transaction request and recording all the relevant data throughout its path through a microservices architecture.
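As a minimal illustration of what distributed tracing does, the Python sketch below propagates a single trace ID through two hypothetical services and records a span for each hop; real systems would delegate this to instrumentation libraries and a collector such as Jaeger or Zipkin.

```python
# Minimal sketch of distributed tracing without a tracing backend: a trace ID is
# generated where the request enters the system, propagated through each hop, and
# every service records a span against it. Service names and headers are hypothetical.
import time
import uuid

SPANS = []  # stand-in for a trace collector such as Jaeger or Zipkin

def record_span(trace_id: str, service: str, operation: str, start: float) -> None:
    SPANS.append({
        "trace_id": trace_id,
        "service": service,
        "operation": operation,
        "duration_ms": round((time.time() - start) * 1000, 2),
    })

def inventory_service(headers: dict) -> str:
    start = time.time()
    result = "in_stock"                      # pretend work
    record_span(headers["x-trace-id"], "inventory", "check_stock", start)
    return result

def order_service(headers: dict) -> str:
    start = time.time()
    status = inventory_service(headers)      # the same trace ID flows downstream
    record_span(headers["x-trace-id"], "orders", "place_order", start)
    return status

if __name__ == "__main__":
    headers = {"x-trace-id": uuid.uuid4().hex}  # created at the edge of the system
    order_service(headers)
    for span in SPANS:                          # all spans share one trace_id
        print(span)
```

Because every span carries the same trace ID, the collector can reassemble the full path of one request across services, which is exactly the visibility a monolithic call stack used to give for free.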



Quote for the day:

"Every great leader can take you back to a defining moment when they decided to lead." -- John Paul Warren