Showing posts with label consulting. Show all posts
Showing posts with label consulting. Show all posts

Daily Tech Digest - July 10, 2025


Quote for the day:

"Strive not to be a success, but rather to be of value." -- Albert Einstein


Domain-specific AI beats general models in business applications

Like many AI teams in the mid-2010s, Visma’s group initially relied on traditional deep learning methods such as recurrent neural networks (RNNs), similar to the systems that powered Google Translate back in 2015. But around 2020, the Visma team made a change. “We scrapped all of our development plans and have been transformer-only since then,” says Claus Dahl, Director ML Assets at Visma. “We realized transformers were the future of language and document processing, and decided to rebuild our stack from the ground up.” ... The team’s flagship product is a robust document extraction engine that processes documents in the countries where Visma companies are active. It supports a variety of languages. The AI could be used for documents such as invoices and receipts. The engine identifies key fields, such as dates, totals, and customer references, and feeds them directly into accounting workflows. ... “High-quality data is more valuable than high volumes. We’ve invested in a dedicated team that curates these datasets to ensure accuracy, which means our models can be fine-tuned very efficiently,” Dahl explains. This strategy mirrors the scaling laws used by large language models but tailors them for targeted enterprise applications. It allows the team to iterate quickly and deliver high performance in niche use cases without excessive compute costs.


The case for physical isolation in data centre security

Hardware-enforced physical isolation is fast becoming a cornerstone of modern cybersecurity strategy. These physical-layer security solutions allow your critical infrastructure – servers, storage and network segments – to be instantly disconnected on demand, using secure, out-of-band commands. This creates a last line of defence that holds even when everything else fails. After all, if malware can’t reach your system, it can’t compromise it. If a breach does occur, physical segmentation contains it in milliseconds, stopping lateral movement and keeping operations running without disruption. In stark contrast to software-only isolation, which relies on the very systems it seeks to protect, hardware isolation remains immune to tampering. ... When ransomware strikes, every second counts. In a colocation facility, traditional defences might flag the breach, but not before it worms its way across tenants. By the time alerts go out, the damage is done. With hardware isolation, there’s no waiting: the compromised tenant can be physically disconnected in milliseconds, before the threat spreads, before systems lock up, before wallets and reputations take a hit. What makes this model so effective is its simplicity. In an industry where complexity is the norm, physical isolation offers a simple, fundamental truth: you’re either connected or you’re not. No grey areas. No software dependency. Just total certainty.


Scaling without outside funding: Intuitive's unique approach to technology consulting

We think for any complex problem, a good 60–70% of it can be solved through innovation. That's always our first principle. Then where we see any inefficiencies; be it in workflows or process, automation works for the other 20% of the friction. The remaining 10–20% is where the engineering plays its important role, and it allows to touch on the scale, security and governance aspects. In data specifically, we are referencing the last 5–6 years of massive investments. We partner with platforms like Databricks and DataMiner and we've invested in companies like TESL and Strike AI for securing their AI models. ... In the cloud space, we see a shift from migration to modernisation (and platform engineering). Enterprises are focussing on modernisation of both applications and databases because those are critical levers of agility, security, and business value. In AI it is about data readiness; the majority of enterprise data is very fragmented or very poor quality which makes any AI effort difficult. Next is understanding existing processes—the way work is done at scale—which is critical for enabling GenAI. But the true ROI is Agentic AI—autonomous systems which don’t just tell you what to do, but just do it. We’ve been investing heavily in this space since 2018. 


The Future of Professional Ethics in Computing

Recent work on ethics in computing has focused on artificial intelligence (AI) with its success in solving problems, processing large amounts of data, and with the award of Nobel Prizes to AI researchers. Large language models and chatbots such as ChatGPT suggest that AI will continue to develop rapidly, acquire new capabilities, and affect many aspects of human existence. Many of the issues raised in the ethics of AI overlap previous discussions. The discussion of ethical questions surrounding AI is reaching a much broader audience, has more societal impact, and is rapidly transitioning to action through guidelines and the development of organizational structure, regulation, and legislation. ... Ethics of digital technologies in modern societies raises questions that traditional ethical theories find difficult to answer. Current socio-technical arrangements are complex ecosystems with a multitude of human and non-human stakeholders, influences, and relationships. The questions of ethics in ecosystems include: Who are members? On what grounds are decisions made and how are they implemented and enforced? Which normative foundations are acceptable? These questions are not easily answered. Computing professionals have important contributions to make to these discussions and should use their privileges and insights to help societies navigate them.


AI Agents Vs RPA: What Every Business Leader Needs To Know

Technically speaking, RPA isn’t intelligent in the same way that we might consider an AI system like ChatGPT to mimic some functions of human intelligence. It simply follows the same rules over and over again in order to spare us the effort of doing it. RPA works best with structured data because, unlike AI, it doesn't have the ability to analyze and understand unstructured data, like pictures, videos, or human language. ... AI agents, on the other hand, use language models and other AI technologies like computer vision to understand and interpret the world around them. As well as simply analyzing and answering questions about data, they are capable of taking action by planning how to achieve the results they want and interacting with third-party services to get it done. ... Using RPA, it would be possible to extract details about who sent the mail, the subject line, and the time and date it was sent. This can be used to build email databases and broadly categorize emails according to keywords. An agent, on the other hand, could analyze the sentiment of the email using language processing, prioritize it according to urgency, and even draft and send a tailored response. Over time, it learns how to improve its actions in order to achieve better resolutions.


How To Keep AI From Making Your Employees Stupid

Treat AI-generated content like a highly caffeinated first draft – full of energy, but possibly a little messy and prone to making things up. Your job isn’t to just hit “generate” and walk away unless you enjoy explaining AI hallucinations or factual inaccuracies to your boss (or worse, your audience). Always, always edit aggressively, proofread and, most critically, fact-check every single output. This process isn’t just about catching AI’s mistakes; it actively engages your critical thinking skills, forcing you to verify information and refine expression. Think of it as intellectual calisthenics. ... Don’t settle for the first answer AI gives you. Engage in a dialogue. Refine your prompts, ask follow-up questions, request different perspectives and challenge its assumptions. This iterative process of refinement forces you to think more clearly about your own needs, to be precise in your instructions, and to critically evaluate the nuances of the AI’s response. ... The MIT study serves as a crucial wake-up call: over-reliance on AI can indeed make us “stupid” by atrophying our critical thinking skills. However, the solution isn’t to shun AI, but to engage with it intelligently and responsibly. By aggressively editing, proofreading and fact-checking AI outputs, by iteratively refining prompts and by strategically choosing the right AI tool for each task, we can ensure AI serves as a powerful enhancer, not a detrimental crutch.


What EU’s PQC roadmap means on the ground

The EU’s PQC roadmap is broadly aligned with that from NIST; both advise a phased migration to PQC with hybrid-PQC ciphers and hybrid digital certificates. These hybrid solutions provide the security promises of brand new PQC algorithms, whilst allowing legacy devices that do not support them, to continue using what’s now being called ‘classical cryptography’. In the first instance, both the EU and NIST are recommending that non-PQC encryption is removed by 2030 for critical systems, with all others following suit by 2035. While both acknowledge the ‘harvest now, decrypt later’ threat, neither emphasise the importance of understanding the cover time of data; nor reference the very recent advancements in quantum computing. With many now predicting the arrival of cryptographically relevant quantum computers (CRQC) by 2030, if organizations or governments have information with a cover time of five years or more, it is already too late for many to move to PQC in time. Perhaps the most significant difference that EU organizations will face compared to their American counterparts, is that the European roadmap is more than just advice; in time it will be enforced through various directives and regulations. PQC is not explicitly stated in EU regulations, although that is not surprising.


The trillion-dollar question: Who pays when the industry’s AI bill comes due?

“The CIO is going to be very, very busy for the next three, four years, and that’s going to be the biggest impact,” he says. “All of a sudden, businesspeople are starting to figure out that they can save a ton of money with AI, or they can enable their best performers to do the actual job.” Davidov doesn’t see workforce cuts matching AI productivity increases, even though some job cuts may be coming. ... “The costs of building out AI infrastructure will ultimately fall to enterprise users, and for CIOs, it’s only a question of when,” he says. “While hyperscalers and AI vendors are currently shouldering much of the expense to drive adoption, we expect to see pricing models evolve.” Bhathena advises CIOs to look beyond headline pricing because hidden costs, particularly around integrating AI with existing legacy systems, can quickly escalate. Organizations using AI will also need to invest in upskilling employees and be ready to navigate increasingly complex vendor ecosystems. “Now is the time for organizations to audit their vendor agreements, ensure contract flexibility, and prepare for potential cost increases as the full financial impact of AI adoption becomes clearer,” he says. ... Baker advises CIOs to be careful about their purchases of AI products and services and tie new deployments to business needs.


Multi-Cloud Adoption Rises to Boost Control, Cut Cost

Instead of building everything on one platform, IT leaders are spreading out their workloads, said Joe Warnimont, senior analyst at HostingAdvice. "It's no longer about chasing the latest innovation from a single provider. It's about building a resilient architecture that gives you control and flexibility for each workload." Cost is another major factor. Even though hyperscalers promote their pay-as-you-go pricing, many enterprises find it difficult to predict and manage costs at scale. This is true for companies running hundreds or thousands of workloads across different regions and teams. "You'd think that pay-as-you-go would fit any business model, but that's far from the case. Cost predictability is huge, especially for businesses managing complex budgets," Warnimont said. To gain more control over pricing and features, companies are turning to alternative cloud providers, such as DigitalOcean, Vultr and Backblaze. These platforms may not have the same global footprint as AWS or Azure but they offer specialized services, better pricing and flexibility for certain use cases. An organization needing specific development environments may go to DigitalOcean. Another may chose Vultr for edge computing. Sometimes the big players just don't offer what a specific workload requires. 


How CISOs are training the next generation of cyber leaders

While Abousselham champions a personalized, hands-on approach to developing talent, other CISOs are building more formal pathways to support emerging leaders at scale. For others like PayPal CISO Shaun Khalfan, structured development was always part of his career. He participated in formal leadership training programs offered by the Department of Defense and those run by the American Council for Technology. ... Structured development is also happening inside companies like the insurance brokerage firm Brown & Brown. CISO Barry Hensley supports an internal cohort program designed to identify and grow emerging leaders early in their careers. “We look at our – I’m going to call it newer or younger – employees,” he explains. “And if you become recognized in your first, second, or third year as having the potential to [become a leader], you get put in a program,” he explains. ... Khalfan believes good CISOs should be able to dive deep with engineers while also leading boardroom conversations. “It’s been a long time since I’ve written code,” he says, “but I at least understand how to have a deep conversation and also be able to have a board discussion with someone.” Abousselham agrees that technical experience is only one part of the puzzle. 

Daily Tech Digest - May 17, 2025


Quote for the day:

“Only those who dare to fail greatly can ever achieve greatly.” -- Robert F.


Top 10 Best Practices for Effective Data Protection

Your first instinct may be to try to keep up with all your data, but this may be a fool's errand. The key to success is to have classification capabilities everywhere data moves, and rely on your DLP policy to jump in when risk arises. Automation in data classification is becoming a lifesaver thanks to the power of AI. AI-powered classification can be faster and more accurate than traditional ways of classifying data with DLP. Ensure any solution you are evaluating can use AI to instantly uncover and discover data without human input. ... Data loss prevention (DLP) technology is the core of any data protection program. That said, keep in mind that DLP is only a subset of a larger data protection solution. DLP enables the classification of data (along with AI) to ensure you can accurately find sensitive data. Ensure your DLP engine can consistently alert correctly on the same piece of data across devices, networks, and clouds. The best way to ensure this is to embrace a centralized DLP engine that can cover all channels at once. Avoid point products that bring their own DLP engine, as this can lead to multiple alerts on one piece of moving data, slowing down incident management and response. Look to embrace Gartner's security service edge approach, which delivers DLP from a centralized cloud service. 


4 Keys To Successful Change Management From The Bain Playbook

From the start, Bain was crystal clear about its case for change, according to Razdan. The company prioritized change management, which meant IT partnering with finance; it also meant cultivating a mindset conducive to change. “We owned the change; we identified a group of high performers within our finance and our IT teams. This community of super-users could readily identify and deal with any of the problems that typically arise in an implementation of this size and scale,” Mackey said. “This was less just changing their technology; it’s changing employee behaviors and setting us up for how we want to grow and change processes going forward.” ... “We actually set up a program to be always measuring the value,” Razdan said. “You have internal stakeholders, you have external stakeholders, you have partnerships; we kind of built an ecosystem of governance and partnership that enabled us to keep everybody on the same page because transparency and communication is critical to success.” Gauging progress via transparent key performance indicators was all the more impressive, given that most of this happened during the worldwide, pandemic-driven move to remote work. “We could assess the implementation, as we went through it, to keep us on track [and] course correct,” Mackey said. 


Emerging AI security risks exposed in Pangea's global study

A significant finding was the non-deterministic nature of large language model (LLM) security. Prompt injection attacks, a method where attackers manipulate input to provoke undesired responses from AI systems, were found to succeed unpredictably. An attack that fails 99 times could succeed on the 100th attempt with identical input, due to the underlying randomness in LLM processing. The study also revealed substantial risks of data leakage and adversarial reconnaissance. Attackers using prompt injection can manipulate AI models to disclose sensitive information or contextual details about the environment in which the system operates, such as server types and network access configurations. 'This challenge has given us unprecedented visibility into real-world tactics attackers are using against AI applications today,' said Oliver Friedrichs, Co-Founder and Chief Executive Officer of Pangea. 'The scale and sophistication of attacks we observed reveal the vast and rapidly evolving nature of AI security threats. Defending against these threats must be a core consideration for security teams, not a checkbox or afterthought.' Findings indicated that basic defences, such as native LLM guardrails, left organisations particularly exposed. 


Dynamic DNS Emerges as Go-to Cyberattack Facilitator

Dynamic DNS (DDNS) services automatically update a domain name's DNS records in real-time when the Internet service provider changes the IP address. Real-time updating for DNS records wasn't needed in the early days of the Internet when static IP addresses were the norm. ... It sounds simple enough, yet bad actors have abused the services for years. More recently, though, cybersecurity vendors have observed an increase in such activity, especially this year. The notorious cybercriminal collective Scattered Spider, for instance, has turned to DDNS to obfuscate its malicious activity and impersonate well-known brands in social engineering attacks. This trend has some experts concerned about a rise in abuse and a surge in "rentable" subdomains. ... In an example of an observed attack, Scattered Spider actors established a new subdomain, klv1.it[.]com, designed to impersonate a similar domain, klv1.io, for Klaviyo, a Boston-based marketing automation company. Silent Push's report noted that the malicious domain had just five detections on VirusTotal at the time of publication. The company also said the use of publicly rentable subdomains presents challenges for security researchers. "This has been something that a lot of threat actors do — they use these services because they won't have domain registration fingerprints, and it makes it harder to track them," says Zach Edwards, senior threat researcher at Silent Push.


The Growing and Changing Threat of Deepfake Attacks

To ensure their deepfake attacks are convincing, malicious actors are increasingly focusing on more believable delivery, enhanced methods, such as phone number spoofing, SIM swapping, malicious recruitment accounts and information-stealing malware. These methods allow actors to convincingly deliver deepfakes and significantly increase a ploy’s overall credibility. ... High-value deepfake targets, such as C-suite executives, key data custodians, or other significant employees, often have moderate to high volumes of data available publicly. In particular, employees appearing on podcasts, giving interviews, attending conferences, or uploading videos expose significant volumes of moderate- to high-quality data for use in deepfakes. This dictates that understanding individual data exposure becomes a key part of accurately assessing the overall enterprise risk of deepfakes. Furthermore, ACI research indicates industries such as consulting, financial services, technology, insurance and government often have sufficient publicly available data to enable medium-to high-quality deepfakes. Ransomware groups are also continuously leaking a high volume of enterprise data. This information can help fuel deepfake content to “talk” about genuine internal documents, employee relationships and other internal details. 


Binary Size Matters: The Challenges of Fitting Complex Applications in Storage-Constrained Devices

Although we are here focusing on software, it is important to say that software does not run in a vacuum. Having an understanding of the hardware our programs run on and even how hardware is developed can offer important insights into how to tackle programming challenges. In the software world, we have a more iterative process, new features and fixes can usually be incorporated later in the form of over-the-air updates, for example. That is not the case with hardware. Design errors and faults in hardware can at the very best be mitigated with considerable performance penalties. These errors can introduce the meltdown and spectre vulnerabilities, or render the whole device unusable. Therefore the hardware design phase has a much longer and rigorous process before release than the software design phase. This rigorous process also impacts design decisions in terms of optimizations and computational power. Once you define a layout and bill of materials for your device, the expectation is to keep this constant for production as long as possible in order to reduce costs. Embedded hardware platforms are designed to be very cost-effective. Designing a product whose specifications such as memory or I/O count are wasted also means a cost increase in an industry where every cent in the bill of materials matters.


Cyber Insurance Applications: How vCISOs Bridge the Gap for SMBs

Proactive risk evaluation is a game-changer for SMBs seeking to maintain robust insurance coverage. vCISOs conduct regular risk assessments to quantify an organization’s security posture and benchmark it against industry standards. This not only identifies areas for improvement but also helps maintain compliance with evolving insurer expectations. Routine audits—led by vCISOs—keep security controls effective and relevant. Third-party risk evaluations are particularly valuable, given the rise in supply chain attacks. By ensuring vendors meet security standards, SMBs reduce their overall risk profile and strengthen their position during insurance applications and renewals. Employee training programs also play a critical role. By educating staff on phishing, social engineering, and other common threats, vCISOs help prevent incidents before they occur. ... For SMBs, navigating the cyber insurance landscape is no longer just a box-checking exercise. Insurers demand detailed evidence of security measures, continuous improvement, and alignment with industry best practices. vCISOs bring the technical expertise and strategic perspective necessary to meet these demands while empowering SMBs to strengthen their overall security posture.


How to establish an effective AI GRC framework

Because AI introduces risks that traditional GRC frameworks may not fully address, such as algorithmic bias and lack of transparency and accountability for AI-driven decisions, an AI GRC framework helps organizations proactively identify, assess, and mitigate these risks, says Heather Clauson Haughian, co-founding partner at CM Law, who focuses on AI technology, data privacy, and cybersecurity. “Other types of risks that an AI GRC framework can help mitigate include things such as security vulnerabilities where AI systems can be manipulated or exposed to data breaches, as well as operational failures when AI errors lead to costly business disruptions or reputational harm,” Haughian says. ... Model governance and lifecycle management are also key components of an effective AI GRC strategy, Haughian says. “This would cover the entire AI model lifecycle, from data acquisition and model development to deployment, monitoring, and retirement,” she says. This practice will help ensure AI models are reliable, accurate, and consistently perform as expected, mitigating risks associated with model drift or errors, Haughian says. ... Good policies balance out the risks and opportunities that AI and other emerging technologies, including those requiring massive data, can provide, Podnar says. “Most organizations don’t document their deliberate boundaries via policy,” Podnar says. 


How to Keep a Consultant from Stealing Your Idea

The best defense is a good offense, Thirmal says. Before sharing any sensitive information, get the consultant to sign a non-disclosure agreement (NDA) and, if needed, a non-compete agreement. "These legal documents set clear boundaries on what can and can't do with your ideas." He also recommends retaining records -- meeting notes, emails, and timestamps -- to provide documented proof of when and where the idea in question was discussed. ... If a consultant takes an idea and commercializes it, or shares it with a competitor, it's time to consult legal counsel, Paskalev says. The legal case's strength will hinge on the exact wording within contracts and documentation. "Sometimes, a well-crafted cease-and-desist letter is enough; other times, litigation is required." ... The best way to protect ideas isn't through contracts -- it's by being proactive, Thirmal advises. "Train your team to be careful about what they share, work with consultants who have strong reputations, and document everything," he states. "Protecting innovation isn’t just a legal issue -- it's a strategic one." Innovation is an IT leader's greatest asset, but it's also highly vulnerable, Paskalev says. "By proactively structuring consultant agreements, meticulously documenting every stage of idea development, and being ready to enforce protection, organizations can ensure their competitive edge."


Even the Strongest Leaders Burn Out — Here's the Best Way to Shake the Fatigue

One of the most overlooked challenges in leadership is the inability to step back from the work and see the full picture. We become so immersed in the daily fires, the high-stakes meetings, the make-or-break moments, that we lose the ability to assess the battlefield objectively. The ocean, or any intense, immersive activity, provides that critical reset. But stepping away isn't just about swimming in the ocean. It's about breaking patterns. Leaders are often stuck in cycles — endless meetings, fire drills, back-to-back calls. The constant urgency can trick you into believing that everything is critical. That's why you need moments that pull you out of the daily grind, forcing you to reset before stepping back in. This is where intentional recovery becomes a strategic advantage. Top-performing leaders across industries — from venture capitalists to startup founders — intentionally carve out time for activities that challenge them in different ways. ... The most effective leaders understand that managing their energy is just as important as managing their time. When energy levels dip, cognitive function suffers, and decision-making becomes less strategic. That's why companies known for their progressive workplace cultures integrate mindfulness practices, outdoor retreats and wellness programs — not as perks, but as necessary investments in long-term performance.

Daily Tech Digest - April 03, 2025


Quote for the day:

"The most difficult thing is the decision to act, the rest is merely tenacity." -- Amelia Earhart


Veterans are an obvious fit for cybersecurity, but tailored support ensures they succeed

Both civilian and military leaders have long seen veterans as strong candidates for cybersecurity roles. The National Initiative for Cybersecurity Careers and Studies, part of the US Cybersecurity and Infrastructure Security Agency (CISA), speaks directly to veterans, saying “Your skills and training from the military translate well to a cyber career.” NICCS continues, “Veterans’ backgrounds in managing high-pressure situations, attention to detail, and understanding of secure communications make them particularly well-suited for this career path.” Gretchen Bliss, director of cybersecurity programs at the University of Colorado at Colorado Springs (UCCS), speaks specifically to security execs on the matter: “If I were talking to a CISO, I’d say get your hands on a veteran. They understand the practical application piece, the operational piece, they have hands-on experience. They think things through, they know how to do diagnostics. They already know how to tackle problems.” ... And for veterans who haven’t yet mastered all that, Andrus advises “networking with people who actually do the job you want.” He also advises veterans to learn about the environment at the organization they seek to join, asking themselves whether they’d fit in. And he recommends connecting with others to ease the transition.


The 6 disciplines of strategic thinking

A strategic thinker is not just a good worker who approaches a challenge with the singular aim of resolving the problem in front of them. Rather, a strategic thinker looks at and elevates their entire ecosystem to achieve a robust solution. ... The first discipline is pattern recognition. A foundation of strategic thinking is the ability to evaluate a system, understand how all its pieces move, and derive the patterns they typically form. ... Watkins’s next discipline, and an extension of pattern recognition, is systems analysis. It is easy to get overwhelmed when breaking down the functional elements of a system. A strategic thinker avoids this by creating simplified models of complex patterns and realities. ... Mental agility is Watkins’s third discipline. Because the systems and patterns of any work environment are so dynamic, leaders must be able to change their perspective quickly to match the role they are examining. Systems evolve, people grow, and the larger picture can change suddenly. ... Structured problem-solving is a discipline you and your team can use to address any issue or challenge. The idea of problem-solving is self-explanatory; the essential element is the structure. Developing and defining a structure will ensure that the correct problem is addressed in the most robust way possible.


Why Vendor Relationships Are More Important Than Ever for CIOs

Trust is the necessary foundation, which is built through open communication, solid performance, relevant experience, and proper security credentials and practices. “People buy from people they trust, no matter how digital everything becomes,” says Thompson. “That human connection remains crucial, especially in tech where you're often making huge investments in mission-critical systems.” ... An executive-level technology governance framework helps ensure effective vendor oversight. According to Malhotra, it should consist of five key components, including business relationship management, enterprise technology investment, transformation governance, value capture and having the right culture and change management in place. Beneath the technology governance framework is active vendor governance, which institutionalizes oversight across ten critical areas including performance management, financial management, relationship management, risk management, and issues and escalations. Other considerations include work order management, resource management, contract and compliance, having a balanced scorecard across vendors and principled spend and innovation.


Shadow Testing Superpowers: Four Ways To Bulletproof APIs

API contract testing is perhaps the most immediately valuable application of shadow testing. Traditional contract testing relies on mock services and schema validation, which can miss subtle compatibility issues. Shadow testing takes contract validation to the next level by comparing actual API responses between versions. ... Performance testing is another area where shadow testing shines. Traditional performance testing usually happens late in the development cycle in dedicated environments with synthetic loads that often don’t reflect real-world usage patterns. ... Log analysis is often overlooked in traditional testing approaches, yet logs contain rich information about application behavior. Shadow testing enables sophisticated log comparisons that can surface subtle issues before they manifest as user-facing problems. ... Perhaps the most innovative application of shadow testing is in the security domain. Traditional security testing often happens too late in the development process, after code has already been deployed. Shadow testing enables a true shift left for security by enabling dynamic analysis against real traffic patterns. ... What makes these shadow testing approaches particularly valuable is their inherently low-maintenance nature. 


Rethinking technology and IT's role in the era of agentic AI and digital labor

Rethinking technology and the role of IT will drive a shift from the traditional model to a business technology-focused model. One example will be the shift from one large, dedicated IT team that traditionally handles an organization's technology needs, overseen and directed by the CIO, to more focused IT teams that will perform strategic, high-value activities and help drive technology innovation strategy as Gen AI handles many routine IT tasks. Another shift will be spending and budget allocations. Traditionally, CIOs manage the enterprise IT budget and allocation. In the new model, spending on enterprise-wide IT investments continues to be assessed and guided by the CIO, and some enterprise technology investments are now governed and funded by the business units. ... Today, agentic AI is not just answering questions -- it's creating. Agents take action autonomously. And it's changing everything about how technology-led enterprises must design, deploy, and manage new technologies moving forward. We are building self-driving autonomous businesses using agentic AI where humans and machines work together to deliver customer success. However, giving agency to software or machines to act will require a new currency. Trust is the new currency of AI.


From Chaos to Control: Reducing Disruption Time During Cyber Incidents and Breaches

Cyber disruptions are no longer isolated incidents; they have ripple effects that extend across industries and geographic regions. In 2024, two high-profile events underscored the vulnerabilities in interconnected systems. The CrowdStrike IT outage resulted in widespread airline cancellations, impacting financial markets and customer trust, while the Change Healthcare ransomware attack disrupted claims processing nationwide, costing billions in financial damages. These cases emphasize why resilience professionals must proactively integrate automation and intelligence into their incident response strategies. ... Organizations need structured governance models that define clear responsibilities before, during, and after an incident. AI-driven automation enables proactive incident detection and streamlined responses. Automated alerts, digital action boards, and predefined workflows allow teams to act swiftly and decisively, reducing downtime and minimizing operational losses. Data is the foundation of effective risk and resilience management. When organizations ensure their data is reliable and comprehensive, they gain an integrated view that enhances visibility across business continuity, IT, and security teams. 


What does an AI consultant actually do?

AI consulting involves advising on, designing and implementing artificial intelligence solutions. The spectrum is broad, ranging from process automation using machine learning models to setting up chatbots and performing complex analyses using deep learning methods. However, the definition of AI consulting goes beyond the purely technical perspective. It is an interdisciplinary approach that aligns technological innovation with business requirements. AI consultants are able to design technological solutions that are not only efficient but also make strategic sense. ... All in all, both technical and strategic thinking is required: Unlike some other technology professions, AI consulting not only requires in-depth knowledge of algorithms and data processing, but also strategic and communication skills. AI consultants talk to software development and IT departments as well as to management, product management or employees from the relevant field. They have to explain technical interrelations clearly and comprehensibly so that the company can make decisions based on this knowledge. Since AI technologies are developing rapidly, continuous training is important. Online courses, boot camps and certificates as well as workshops and conferences. 


Building a cybersecurity strategy that survives disruption

The best strategies treat resilience as a core part of business operations, not just a security add-on. “The key to managing resilience is to approach it like an onion,” says James Morris, Chief Executive of The CSBR. “The best strategy is to be effective at managing the perimeter. This approach will allow you to get a level of control on internal and external forces which are key to long-term resilience.” That layered thinking should be matched by clearly defined policies and procedures. “Ensure that your ‘resilience’ strategy and policies are documented in detail,” Morris advises. “This is critical for response planning, but also for any legal issues that may arise. If it’s not documented, it doesn’t happen.” ... Move beyond traditional monitoring by implementing advanced, behaviour-based anomaly detection and AI-driven solutions to identify novel threats. Invest in automation to enhance the efficiency of detection, triage, and initial response tasks, while orchestration platforms enable coordinated workflows across security and IT tools, significantly boosting response agility. ... A good strategy starts with the idea that stuff will break. So you need things like segmentation, backups, and backup plans for your backup plans, along with alternate ways to get back up and running. Fast, reliable recovery is key. Just having backups isn’t enough anymore.


3 key features in Kong AI Gateway 3.10

For teams working with sensitive or regulated data, protecting personally identifiable information (PII) in AI workflows is not optional, it’s essential for proper governance. Developers often use regex libraries or handcrafted filters to redact PII, but these DIY solutions are prone to error, inconsistent enforcement, and missed edge cases. Kong AI Gateway 3.10 introduces out-of-the-box PII sanitization, giving platform teams a reliable, enterprise-grade solution to scrub sensitive information from prompts before they reach the model. And if needed, reinserting sanitized data in the response before it returns to the end user. ... As organizations adopt multiple LLM providers and model types, complexity can grow quickly. Different teams may prefer OpenAI, Claude, or open-source models like Llama or Mistral. Each comes with its own SDKs, APIs, and limitations. Kong AI Gateway 3.10 solves this with universal API support and native SDK integration. Developers can continue using the SDKs they already rely on (e.g., AWS, Azure) while Kong translates requests at the gateway level to interoperate across providers. This eliminates the need for rewriting app logic when switching models and simplifies centralized governance. This latest release also includes cost-based load balancing, enabling Kong to route requests based on token usage and pricing. 


The future of IT operations with Dark NOC

From a Managed Service Provider (MSP) perspective, Dark NOC will shift the way IT operates today by making it more efficient, scalable, and cost-effective. It will replace Traditional NOC’s manual-intensive task of continuous monitoring, diagnosing, and resolving issues across multiple customer environments. ... Another key factor that Dark NOC enables MSPs is scalability. Its analytics and automation capability allows it to manage thousands of endpoints effortlessly without proportionally increasing engineers’ headcount. This enables MSPs to extend their service portfolios, onboard new customers, and increase profit margins while retaining a lean operational model. From a competitive point of view, adopting Dark NOC enables MSPs to differentiate themselves from competitors by offering proactive, AI-driven IT services that minimise downtime, enhance security and maximise performance. Dark NOC helps MSPs provide premium service at affordable price points to customers while making a decent margin internally. ... Cloud infrastructure monitoring & management (Provides real-time cloud resource monitoring and predictive insights). Examples include AWS CloudWatch, Azure Monitor, and Google Cloud Operations Suite.

Daily Tech Digest - March 07, 2025


Quote for the day:

"The actions of a responsible executive are contagious." -- Joe D. Batton


Operational excellence with AI: How companies are boosting success with process intelligence everyone can access

The right tooling can make a company’s processes visible and accessible to more than just its process experts. With strategic stakeholders and lines of business users involved, the very people who best know the business can contribute to innovation, design new processes and cut out endless wasted hours briefing process experts. AI, essentially, lowers the barrier to entry so everyone can come into the conversation, from process experts to line-of-business users. This speeds up time-to-value in transformation. ... Rather than simply ‘survive,’ companies can use AI to build true resilience — or antifragility — in which they learn from system failures or cybersecurity breaches and operationalize that knowledge. By putting AI into the loop on process breaks and testing potential scenarios via a digital twin of the organization, non-process experts and stakeholders are empowered to mitigate risk before escalations. ... Non-process experts must be able to make data-driven decisions faster with AI powered insights that recommend best practices and design principles for dashboards. Any queries that arise should be answered by means of automatically generated visualizations which can be integrated directly into apps — saving time and effort. 


Why Security Leaders Are Opting for Consulting Gigs

CISOs are asked to balance business objectives alongside product and infrastructure security, ransomware defense, supply chain security, AI governance, and compliance with increasingly complex regulations like the SEC's cyber-incident disclosure rules. Increased pressure for transparency puts CISOs in a tough situation when they must choose between disclosing an incident that could have adverse effects on the business or not disclosing it and risking personal financial ruin. ... The vCISO model emerged as a practical solution, particularly for midsize companies that need executive-level security expertise but can't justify a full-time CISO's compensation package. ... The surge in vCISOs should serve as a warning to boards and executives. If you're struggling to retain security leadership or considering a virtual CISO, you need to examine why. Is it about flexibility and cost, or have you created an environment where security leaders can't succeed? The pendulum will inevitably swing back as organizations realize that effective security leadership requires consistent, dedicated attention. ... Your CISO is working hard to protect your organization. So who will protect your CISO? Now is a great time to check in on them. Make sure they feel like they're fighting a winnable fight. 


How to Build a Reliable AI Governance Platform

An effective AI governance platform includes four fundamental components: data governance, technical controls, ethical guidelines and reporting mechanisms, says Beena Ammanath, executive director of the Global Deloitte AI Institute. "Data governance is necessary for ensuring that data within an organization is accurate, consistent, secure and used responsibly," she explains in an online interview. Technical controls are essential for tasks such as testing and validating GenAI models to ensure their performance and reliability, Ammanath says. "Ethical and responsible AI use guidelines are critical, covering aspects such as bias, fairness, and accountability to promote trust across the organization and with key stakeholders." ... "AI governance requires a multi-disciplinary or interdisciplinary approach and may involve non-traditional partners such as data science and AI teams, technology teams for the infrastructure, business teams who will use the system or data, governance and risk and compliance teams -- even researchers and customers," Baljevic says. Clark advises working across stakeholder groups. "Technology and business leaders, as well as practitioners -- from ML engineers to IT to functional leads -- should be included in the overall plan, especially for high-risk use case deployments," she says.


Reality Check: Is AI’s Promise to Deliver Competitive Advantage a Dangerous Mirage?

What happens when AI makes our bank’s products completely commoditized and undifferentiated? It’s not a defeatist question for the industry. Instead, it suggests a shortcoming in bank and credit union strategic planning about AI, Henrichs says. "Everyone’s asking about efficiency gains, risk management, and competitive advantages from AI," he suggests. "The uncomfortable truth is that if every bank has access to the same AI capabilities [and increasingly do through vendors like nCino, Q2, and FIS], we’re racing toward commoditization at an unprecedented speed." ... How can boards lead the institution to use AI to amplify existing competitive advantages? It’s not just about the technology. It’s "the combination of technology stack," say Jim Marous, Co-Publisher of The Financial Brand, with "people, leadership and willingness to take risks that will result in the quality of AI looking far different from bank A to bank Z. AI [is about] rethinking what we do. Further, fast follower doesn’t cut it because trying to copy… ignores the fundamental strategic changes [happening] behind the scenes." Creativity is not exactly a top priority in an industry accountable day-in and day-out to regulators, yet it’s required as technology applies commoditization pressure. 


A strategic playbook for entrepreneurs: 4 paths to success

To make educated choices as an entrepreneur, Scott and Stern recommend a sequential learning process known as test two, choose one for the four strategies within the compass. This is a systematic process where entrepreneurs consider multiple strategic alternatives and identify at least two that are commercially viable before choosing just one. As the authors write in their book, “The intellectual property and architectural strategies are worth testing for entrepreneurs who prefer to put in the work developing and maintaining proprietary technology; meanwhile, value chain and disruption may work better for leaders looking to execute quickly.” Scott referred to Vera Wang as a classic example of sequential learning. As a Ralph Lauren employee and bride-to-be at 35, Wang told her team that she felt there was an untapped market for older women shopping for wedding dresses. The company disagreed, so Wang opened her own shop — but she didn’t launch her line of dresses immediately. Instead, Scott said, Wang filled her shop with traditional dresses and offered only one new dress of her own. The goal was to see which types of customers were interested, as well as which aesthetics ultimately sold, before she started designing her new line. “[Wang] was able to take what she learned about design, customer, messaging, and price point and build it into her venture,” Scott said.


Increasing Engineering Productivity, Develop Software Fast and in a Sustainable Way

The real problem comes when speed means cutting corners - skipping tests, ignoring telemetry, rushing through code reviews. That might seem fine in the moment, but over time, it leads to tech debt and makes development slower, not faster. It’s kind of like skipping sleep to get more done. One late night? No problem. But if you do it every night, your productivity tanks. Same with software - if you never take time to clean up, everything gets harder to change. ... Software engineering productivity and sustainability are influenced by many factors and can mean different things to different people. For me, the two primary drivers that stand out are code quality and efficient processes. High-quality code is modular, readable, and well-documented, which simplifies maintenance, debugging, and scaling, while reducing the burden of technical debt. ... if the developers are not complaining enough, it’s probably because they’ve become complacent with, or resigned to, the status quo. In those cases, we can adopt the "we’re all one team" mindset and actually help them deliver features for a while – on the very clear understanding that we will be taking notes about everything that causes friction and then going and fixing that. That’s an excellent way to get the ground truth about how development is really going: listening, and hands-on learning.


Rethinking System Architecture: The Rise of Distributed Intelligence with eBPF

In an IT world driven by centralized decision-making, gathering insights and applying intelligence often follows a well-established — yet limiting — pattern. At the heart of this model, large volumes of telemetry, observability, and application data are collected by “dumb” data collectors. For analysis, these collectors gather information and ship it to centralized systems, such as databases, security information, event management (SIEM) platforms, or data warehouses. ... By processing data at its origin, we significantly reduce the amount of unnecessary or irrelevant data sent over the network, resulting in lower information transfer overhead. This minimizes the load on the infrastructure itself and cuts down on data storage and processing requirements. The scalability of our systems no longer needs to hinge on the ability to expand storage and analytics power, which is both expensive and inefficient. With eBPF, distributed systems can now analyze data locally, allowing the system to scale out more efficiently as each node can handle its own data processing needs without overwhelming a centralized point of control — and failure. Instead of transferring and storing every piece of data, eBPF can selectively extract the most relevant information, reducing noise and improving the overall signal quality.


How Explainable AI Is Building Trust in Everyday Products

Explainable AI has already picked up tremendous momentum in almost every industry. E-commerce platforms are now starting to avail detailed insight to the user on why a certain product is recommended to them. This reduces decision fatigue and improves the overall shopping experience. Even streaming services such as Netflix and Spotify make suggestions like “Because you watched…” or “Inspired by your playlist.” These insights make users much more connected with what they consume. In healthcare and fitness, the stakes are higher. Users literally rely on apps for critical insight into their health and well-being. Take a dietary suggestion or an exercise recommendation: If explainable AI provides insight into the whys, then users are more likely to feel knowledgeable and confident in those decisions. Even virtual assistants like Alexa and Google Assistant have added explainability features that provide much-needed context for their suggestions and enhance the user experience. ... Explainable AI has quite a number of challenges that stand in the way of its implementation. The need for simplifying such a very complex AI decision to some explainable form consumable by users is not a trivial task. The balance lies in clear explanations without oversimplification or misrepresentation of the logic.


IT execs need to embrace a new role: myth-buster

It’s more imperative than ever that IT leaders from the CIO on down educate their colleagues. It’s far too easy for eager early adopters to get into tech trouble, and it’s better to head off problems before your corporate data winds up, say, being used to train a genAI model. This teaching role is critical for high-ranking execs (C-level execs, board members) in addition to those on the enterprise front lines. CFOs tend to fall in love with promised efficiencies and would-be workforce reductions without understanding all of the implications. CEOs often want to support what their direct reports want — when possible — and board members rarely have any in-depth knowledge of technology issues. It’s especially critical for IT Directors, working with the CIO, to become indispensable sources of tech truth for any company. Not so long ago, business units almost always had to route their technology needs through IT. No more. It’s not a battle that can be won by edicts or directives. IT directives are often ignored by department heads, and memo mayhem won’t help. You have to position your advice as cautionary, educational — helpful even — all in a bid to spare the business unit various disasters. You are their friend. Only then does it have a chance of working. 


Increased Investment in Industrial Cybersecurity Essential for 2025

“The software used in machine controls and other components should be continuously updated by manufacturers to close newly discovered security gaps,” said the CEO of ONEKEY. He cites typical examples such as manufacturing robots, CNC machines, conveyors, packaging machines, production equipment, building automation systems, and heating and cooling systems, which, in some cases, rely on outdated software, making them targets for hackers. ... Firmware, the software embedded in digital control systems, connected devices, machines, and equipment, should be systematically tested for cyber resilience, advises Jan Wendenburg, CEO of ONEKEY. However, according to a report, less than a third (31 percent) of companies regularly conduct security checks on the software integrated into connected devices to identify and close vulnerabilities, thereby reducing potential entry points for hackers. ... Current practices fall far behind the required standards, as shown by the “OT + IoT Cybersecurity Report” by ONEKEY. ... “Manufacturers should align their software development with the upcoming regulatory requirements,” advised Jan Wendenburg. He added, “It is also recommended that the industry requires its suppliers to guarantee and prove the cyber resilience of their products.”

Daily Tech Digest - December 26, 2024

Best Practices for Managing Hybrid Cloud Data Governance

Kausik Chaudhuri, CIO of Lemongrass, explains monitoring in hybrid-cloud environments requires a holistic approach that combines strategies, tools, and expertise. “To start, a unified monitoring platform that integrates data from on-premises and multiple cloud environments is essential for seamless visibility,” he says. End-to-end observability enables teams to understand the interactions between applications, infrastructure, and user experience, making troubleshooting more efficient. ... Integrating legacy systems with modern data governance solutions involves several steps. Modern data governance systems, such as data catalogs, work best when fueled with metadata provided by a range of systems. “However, this metadata is often absent or limited in scope within legacy systems,” says Elsberry. Therefore, an effort needs to be made to create and provide the necessary metadata in legacy systems to incorporate them into data catalogs. Elsberry notes a common blocking issue is the lack of REST API integration. Modern data governance and management solutions typically have an API-first approach, so enabling REST API capabilities in legacy systems can facilitate integration. “Gradually updating legacy systems to support modern data governance requirements is also essential,” he says.


These Founders Are Using AI to Expose and Eliminate Security Risks in Smart Contracts

The vulnerabilities lurking in smart contracts are well-known but often underestimated. “Some of the most common issues include Hidden Mint functions, where attackers inflate token supply, or Hidden Balance Updates, which allow arbitrary adjustments to user balances,” O’Connor says. These aren’t isolated risks—they happen far too frequently across the ecosystem. ... “AI allows us to analyze huge datasets, identify patterns, and catch anomalies that might indicate vulnerabilities,” O’Connor explains. Machine learning models, for instance, can flag issues like reentrancy attacks, unchecked external calls, or manipulation of minting functions—and they do it in real-time. “What sets AI apart is its ability to work with bytecode,” he adds. “Almost all smart contracts are deployed as bytecode, not human-readable code. Without advanced tools, you’re essentially flying blind.” ... As blockchain matures, smart contract security is no longer the sole concern of developers. It’s an industry-wide challenge that impacts everyone, from individual users to large enterprises. DeFi platforms increasingly rely on automated tools to monitor contracts and secure user funds. Centralized exchanges like Binance and Coinbase assess token safety before listing new assets. 


Three best change management practices to take on board in 2025

For change management to truly succeed, companies need to move from being change-resistant to change-ready. This means building up "change muscles" -- helping teams become adaptable and comfortable with change over the long term. For Mel Burke, VP of US operations at Grayce, the key to successful change is speaking to both the "head" and the "heart" of your stakeholders. Involve employees in the change process by giving them a voice and the ability to shape it as it happens. ... Change management works best when you focus on the biggest risks first and reduce the chance of major disruptions. Dedman calls this strategy "change enablement," where change initiatives are evaluated and scored on critical factors like team expertise, system dependencies, and potential customer impact. High-scorers get marked red for immediate attention, while lower-risk ones stay green for routine monitoring to keep the process focused and efficient. ... Peter Wood, CTO of Spectrum Search, swears by creating a "success signals framework" that combines data-driven metrics with culture-focused indicators. "System uptime and user adoption rates are crucial," he notes, "but so are team satisfaction surveys and employee retention 12-18 months post-change." 


Corporate Data Governance: The Cornerstone of Successful Digital Transformation

While traditional data governance focuses on the continuous and tactical management of data assets – ensuring data quality, consistency, and security – corporate data governance elevates this practice by integrating it with the organization’s overall governance framework and strategic objectives. It ensures that data management practices are not operating in silos but are harmoniously aligned and integrated with business goals, regulatory requirements, and ethical standards. In essence, corporate data governance acts as a bridge between data management and corporate strategy, ensuring that every data-related activity contributes to the organization’s mission and objectives. ... In the digital age, data is a critical asset that can drive innovation, efficiency, and competitive advantage. However, without proper governance, data initiatives can become disjointed, risky, and misaligned with corporate goals. Corporate data governance ensures that data management practices are strategically integrated with the organization’s mission, enabling businesses to leverage data confidently and effectively. By focusing on alignment, organizations can make better decisions, respond swiftly to market changes, and build stronger relationships with customers. 


What is an IT consultant? Roles, types, salaries, and how to become one

Because technology is continuously changing, IT consultants can provide clients with the latest information about new technologies as they become available, recommending implementation strategies based on their clients’ needs. As a result, for IT consultants, keeping the pulse of the technology market is essential. “Being a successful IT consultant requires knowing how to walk in the shoes of your IT clients and their business leaders,” says Scott Buchholz, CTO of the government and public services sector practice at consulting firm Deloitte. A consultant’s job is to assess the whole situation, the challenges, and the opportunities at an organization, Buchholz says. As an outsider, the consultant can see things clients can’t. ... “We’re seeing the most in-demand types of consultants being those who specialize in cybersecurity and digital transformation, largely due to increased reliance on remote work and increased risk of cyberattacks,” he says. In addition, consultants with program management skills are valuable for supporting technology projects, assessing technology strategies, and helping organizations compare and make informed decisions about their technology investments, Farnsworth says.


Blockchain + AI: Decentralized Machine Learning Platforms Changing the Game

Tech giants with vast computing resources and proprietary datasets have long dominated traditional AI development. Companies like Google, Amazon, and Microsoft have maintained a virtual monopoly on advanced AI capabilities, creating a significant barrier to entry for smaller players and independent researchers. However, the introduction of blockchain technology and cryptocurrency incentives is rapidly changing this paradigm. Decentralized machine learning platforms leverage blockchain's distributed nature to create vast networks of computing power. These networks function like a global supercomputer, where participants can contribute their unused computing resources in exchange for cryptocurrency tokens. ... The technical architecture of these platforms typically consists of several key components. Smart contracts manage the distribution of computational tasks and token rewards, ensuring transparent and automatic execution of agreements between parties. Distributed storage solutions like IPFS (InterPlanetary File System) handle the massive datasets required for AI training, while blockchain networks maintain an immutable record of transactions and model provenance.


DDoS Attacks Surge as Africa Expands Its Digital Footprint

A larger attack surface, however, is not the only reason for the increased DDoS activity in Africa and the Middle East, Hummel says. "Geopolitical tensions in these regions are also fueling a surge in hacktivist activity as real-world political disputes spill over into the digital world," he says. "Unfortunately, hacktivists often target critical infrastructure like government services, utilities, and banks to cause maximum disruption." And DDoS attacks are by no means the only manifestation of the new threats that organizations in Africa are having to contend with as they broaden their digital footprint. ... Attacks on critical infrastructure and financially motived attacks by organized crime are other looming concerns. In the center's assessment, Africa's government networks and networks belonging to the military, banking, and telecom sectors are all vulnerable to disruptive cyberattacks. Exacerbating the concern is the relatively high potential for cyber incidents resulting from negligence and accidents. Organized crime gangs — the scourge of organizations in the US, Europe, and other parts of the world, present an emerging threat to organizations in Africa, the Center has assessed. 


Optimizing AI Workflows for Hybrid IT Environments

Hybrid IT offers flexibility by combining the scalability of the cloud with the control of on-premises resources, allowing companies to allocate their resources more precisely. However, this setup also introduces complexity. Managing data flow, ensuring security, and maintaining operational efficiency across such a blended environment can become an overwhelming task if not addressed strategically. To manage AI workflows effectively in this kind of setup, businesses must focus on harmonizing infrastructure and resources. ... Performance optimization is crucial when running AI workloads across hybrid environments. This requires real-time monitoring of both on-premises and cloud systems to identify bottlenecks and inefficiencies. Implementing performance management tools allows for end-to-end visibility of AI workflows, enabling teams to proactively address performance issues before they escalate. ... Scalability also supports agility, which is crucial for businesses that need to grow and iterate on AI models frequently. Cloud-based services, in particular, allow teams to experiment and test AI models without being constrained by on-premises hardware limitations. This flexibility is essential for staying competitive in fields where AI innovation happens rapidly.


The Cloud Back-Flip

Cloud repatriation is driven by various factors, including high cloud bills, hidden costs, complexity, data sovereignty, and the need for greater data control. In markets like India—and globally—these factors are all relevant today, points out Vishal Kamani – Cloud Business Head, Kyndryl India. “Currently, rising cloud costs and complexity are part of the ‘learning curve’ for enterprises transitioning to cloud operations.” ... While cloud repatriation is not an alien concept anymore, such reverse migration back to on-premises data centres is seen happening only in organisations that are technology-driven and have deep tech expertise, observes Gaurang Pandya, Director, Deloitte India. “This involves them focusing back on the basics of IT infrastructure which does need a high number of skilled employees. The major driver for such reverse migration is increasing cloud prices and performance requirements. In an era of edge computing and 5G, each end system has now been equipped with much more computing resources than it ever had. This increases their expectations from various service providers.” Money is a big reason too- especially when you don’t know where is it going.


Why Great Programmers fail at Engineering

Being a good programmer is about mastering the details — syntax, algorithms, and efficiency. But being a great engineer? That’s about seeing the bigger picture: understanding systems, designing for scale, collaborating with teams, and ultimately creating software that not only works but excels in the messy, ever-changing real world. ... Good programmers focus on mastering their tools — languages, libraries, and frameworks — and take pride in crafting solutions that are both functional and beautiful. They are the “builders” who bring ideas to life one line of code at a time. ... Software engineering requires a keen understanding of design principles and system architecture. Great code in a poorly designed system is like building a solid wall in a crumbling house — it doesn’t matter how good it looks if the foundation is flawed. Many programmers struggle to:Design systems for scalability and maintainability. Think in terms of trade-offs, such as performance vs. development speed. Plan for edge cases and future growth. Software engineering is as much about people as it is about code. Great engineers collaborate with teams, communicate ideas clearly, and balance stakeholder expectations. ... Programming success is often measured by how well the code runs, but engineering success is about how well the system solves a real-world problem.



Quote for the day:

"Ambition is the path to success. Persistence is the vehicle you arrive in." -- Bill Bradley