
Daily Tech Digest - July 24, 2024

AI generated deepfake fraud drives public appetite for biometrics: FIDO Alliance

“People don’t need to be tech-savvy, the tools are easily accessible online. Deepfakes are as easy as self-service, and this accessibility introduces a significant risk to organizations. How can financial institutions protect themselves against, well, themselves?” The answer, he says, is reliable biometric detection capable of running digital video against biometrically captured data to weed out digital replicas. “Protecting against deepfakes includes layering your processes with multiple checks and balances, all designed to make it increasingly complicated for fraudsters to pull off a successful scam.” For user identity and accessibility checks, he says it is essential to offer “seamless biometric identity verification systems that don’t feel intrusive but do offer increased trust.” “Companies need a strict onboarding process that asks for both biometric and physical proof of identity; that way, security systems can immediately verify someone’s identity. This includes the use of liveness detection and deepfake detection – ensuring a real person is at the end of the camera – and ensuring secure and accurate information authentication and encryption.”


The State of DevOps in the Enterprise

Unfortunately, few if any sites have fully automated DevOps solutions that can keep pace with Agile, no-code and low-code application development -- although everyone has a vision of one day achieving improved infrastructure automation for their applications and systems. ... Infrastructure as code is a method that enables IT to pre-define IT infrastructure for certain types of applications that are likely to be created. By predefining and standardizing the underlying infrastructure components for running new applications on Linux, for instance, you can ensure repeatability and predictability of performance for any application deployed on Linux, which will speed deployments. ... If you’re moving to more operational automation and methods like DevOps and IaC that serve as back-ends to applications built with Agile, no-code and low-code approaches, cross-disciplinary teams of end users, application developers, QA, system programmers, database specialists and network specialists must work together in an iterative approach to application development, deployment and maintenance.
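
A minimal sketch of what such a predefined, standardized template could look like in practice (the names, sizes, and fields below are hypothetical and not tied to any specific IaC tool):

from dataclasses import dataclass

# Hypothetical, pre-approved baseline for any new Linux application.
@dataclass(frozen=True)
class LinuxAppInfrastructure:
    os_image: str = "ubuntu-22.04-hardened"
    instance_size: str = "medium"      # standardized sizing tier
    replicas: int = 2                  # repeatable, predictable capacity
    monitoring_enabled: bool = True

def render_deployment(app_name: str, spec: LinuxAppInfrastructure) -> dict:
    """Turn the standardized spec into a deployment request payload."""
    return {
        "app": app_name,
        "image": spec.os_image,
        "size": spec.instance_size,
        "replicas": spec.replicas,
        "monitoring": spec.monitoring_enabled,
    }

# Every new application starts from the same predefined baseline,
# which is what makes deployments repeatable and predictable.
print(render_deployment("billing-api", LinuxAppInfrastructure()))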


A Blueprint for the Future: Automated Workflow Design

Given the multitude of processes organizations manage, the ability to edit existing workflows or start not from scratch but from a best practice template, assisted by generative AI, holds good potential. I believe this represents another significant step toward enterprise autonomy. This is apt as Blueprint nicely fits into Pega’s messaging that is centred on the concept of the autonomous enterprise. ... In the future, we could see Process Intelligence (PI) integrated with templates and generative AI, pushing the automation of the design process even further. PI identifies which workflows need improving and where. By feeding these insights into an intelligent workflow design tool like Blueprint, we could eventually see workflows being automatically updated to resolve the identified issues. Over time, we might even reach a point where a continuous automated process improvement cycle can be established. This cycle would start with PI capturing insights and feeding them into a Blueprint-like tool to generate updated and improved workflows. These would then be fed into an automated test and deployment platform to complete the improvement, overseen by a supervising AI or human. 


Considerations for AI factories

The new way of thinking, that the “rack is the new server,” enables data center operators to create a scalable solution by designing at the rack level. Within a rack, an entire solution for AI training can be self-contained, with room to expand when more performance is needed. A single rack can contain up to eight servers, each with eight interconnected GPUs. Each GPU can then communicate with many other GPUs in the rack, as the switches can also be contained in the rack. The same communication can be set up between racks to scale beyond a single rack, enabling a single application to use thousands of GPUs. Within an AI factory, different GPUs can be used. Not all applications or their agreed-upon SLAs require the fastest GPUs on the market today. Less powerful GPUs may be entirely adequate for many environments and will typically consume less electricity. In addition, these very dense GPU servers require liquid cooling, which works best when the coolant distribution unit (CDU) is also located within the rack, reducing hose lengths.
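
To make the scaling arithmetic concrete, here is a quick illustrative calculation using the figures quoted above (the target GPU count is an assumption chosen for the example):

servers_per_rack = 8
gpus_per_server = 8
gpus_per_rack = servers_per_rack * gpus_per_server   # 64 GPUs self-contained in one rack

# Scaling beyond a single rack: rack-to-rack links let one application
# span many racks, reaching thousands of GPUs.
target_gpus = 4096
racks_needed = -(-target_gpus // gpus_per_rack)       # ceiling division

print(f"{gpus_per_rack} GPUs per rack; {racks_needed} racks for {target_gpus} GPUs")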


5 Agile Techniques To Help Avoid a CrowdStrike-Like Issue

Agile is exceptionally good at providing a safe playpen in which to look around a project for issues the team may not have focused on initially. It channels people’s interest in areas without losing track of resources. By definition, no one in an organization will spend time considering the possible outcome of things that they have no experience of. However, by pushing on the boundaries of a project, even if based only on hunches or experience, insights arrive. Even if the initial form of a problem cannot be foreseen, the secondary problems can often be. ... The timebox correctly assumes that if a solution requires jumping down a deep rabbit hole, then the solution may not be applicable in the time constraints of the project. This is a good way to understand how no software is an “ultimate solution,” but simply the right way to do things for now, given the resources available. ... Having one member of a team question another member is healthy, but can also create friction. Sometimes the result is just an additional item on a checklist, but sometimes it can trigger a major rethink of the project as a whole.


How to review code effectively: A GitHub staff engineer’s philosophy

Code reviews are impactful because they help exchange knowledge and increase shipping velocity. They are nice, linkable artifacts that peers and managers can use to show how helpful and knowledgeable you are. They can highlight good communication skills, particularly if there’s a complex or controversial change needed. So, making your case well in a code review can not only guide the product’s future and help stave off incidents, it can be good for your career. ... As a reviewer, clarity in communication is key. You’ll want to make clear which of your comments are personal preference and which are blockers for approval. Provide an example of the approach you’re suggesting to elevate your code review and make your meaning even clearer. If you can provide an example from the same repository as the pull request, even better—that further supports your suggestion by encouraging consistent implementations. By contrast, poor code reviews lack clarity. For example, a blanket approval or rejection without any comments can leave the pull request author wondering if the review was thorough. 


Goodbye? Attackers Can Bypass 'Windows Hello' Strong Authentication

Smirnov says his discovery does not indicate that WHfB is insecure. "The insecure part here is not regarding the protocol itself, but rather how the organization forces or does not force strong authentication," he says. "Because what's the point of phishing-resistant authentication if you can just downgrade it to something that is not phishing-resistant?" Smirnov maintains that because of how the WHfB protocol is designed, the entire architecture is phishing resistant. "But since Microsoft, back at the time, had no way to allow organizations to enforce sign-in using this phishing-resistant authentication method, you could always downgrade to a lesser secure authentication method like password and SMS-OTP," Smirnov says. When a user initially registers Windows Hello on their device, the WHfB authentication mechanism creates a private key credential stored in the computer's TPM. The private key is inaccessible to an attacker because it is sandboxed on the TPM; signing in therefore requires completing an authentication challenge with a Windows Hello-compatible biometric or PIN.
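
The pattern Smirnov describes, a device-bound private key answering a server-issued challenge, can be sketched in simplified form below. This is an illustration of generic public-key challenge-response using the Python cryptography package, not the actual WHfB protocol; in the real flow the key is sealed in the TPM and only unlocked by the biometric or PIN gesture.

import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the device generates a key pair; the private key stays on the
# device (in WHfB, sealed inside the TPM) and only the public key is enrolled.
device_key = Ed25519PrivateKey.generate()
enrolled_public_key = device_key.public_key()

# Sign-in: the identity provider issues a random challenge...
challenge = os.urandom(32)

# ...which the device signs only after the local gesture (PIN or biometric) succeeds.
signature = device_key.sign(challenge)

# The server verifies the signature against the enrolled public key;
# verify() raises InvalidSignature if the response was not produced by the device.
enrolled_public_key.verify(signature, challenge)
print("challenge-response verified")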


Cybersecurity ROI: Top metrics and KPIs

The overall security posture of an organization can be quantified by tracking the number and severity of vulnerabilities before and after implementing security measures. A key indicator is the reduction in remediation activities while maintaining or improving the security posture. This can be measured in terms of work hours or effort saved. Traditional metrics for this measurement include the number of detected incidents, Mean Time to Detect (MTTD), Mean Time to Respond (MTTR), and patch management (average time to deploy fixes). Awareness training and measuring phishing success rates are also crucial. ... Evaluating the cost-effectiveness of risk mitigation strategies is paramount. This includes comparing the costs of various security measures against the potential losses from security incidents and tying that figure back to patch management, paired up against the number of vulnerabilities remediated. With modern programs, enterprises are empowered to remediate what matters most from a risk perspective. All in all, a remediation cost is a better measure of an organization’s overall security posture than the cost of an incident.
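
As a rough sketch of how the traditional metrics mentioned above, MTTD and MTTR, could be computed from incident records (the record fields and timestamps below are hypothetical):

from datetime import datetime
from statistics import mean

# Hypothetical incident records: when the issue started, was detected, and was resolved.
incidents = [
    {"occurred": datetime(2024, 7, 1, 9, 0), "detected": datetime(2024, 7, 1, 11, 30), "resolved": datetime(2024, 7, 1, 16, 0)},
    {"occurred": datetime(2024, 7, 3, 2, 15), "detected": datetime(2024, 7, 3, 2, 45), "resolved": datetime(2024, 7, 3, 6, 0)},
]

mttd_hours = mean((i["detected"] - i["occurred"]).total_seconds() / 3600 for i in incidents)
mttr_hours = mean((i["resolved"] - i["detected"]).total_seconds() / 3600 for i in incidents)

print(f"MTTD: {mttd_hours:.1f} h, MTTR: {mttr_hours:.1f} h")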


Agentic AI drives enterprises away from public clouds

Decoupled and distributed systems running AI agents require hundreds of lower-powered processors that need to run independently. Cloud computing is typically not a good fit for this. However, it can still be a node within these distributed AI agents that run on heterogeneous and complex deployments outside public cloud solutions. The ongoing maturation of agentic AI will further incentivize the move away from the public cloud. Enterprises will increasingly invest in dedicated hardware tailored to specific AI tasks, from intelligent Internet of Things devices to sophisticated on-premises servers. This transition will necessitate robust integration frameworks to ensure seamless interaction between diverse systems, optimizing AI operations across the board. ... Integrating agentic AI marks a significant pivot in enterprise strategy, driving companies away from public cloud solutions. By adopting non-public cloud technologies and investing in adaptable, secure, and cost-efficient infrastructure, enterprises can fully leverage the potential of agentic AI. 


Learn About Data Privacy and How to Navigate the Information Security Regulatory Landscape

Regulators have made it clear that they are actively monitoring compliance with new state privacy laws. Even if the scope of exposure is relatively low due to partial exemptions, documenting compliance can be key. While companies are struggling to keep up with the expanding patchwork, regulators are also struggling to find the manpower to investigate the huge scope of companies coming under their jurisdiction. ... With the continual rise in cyber threats and a constantly evolving regulatory landscape for data privacy and information security, staying on top of and complying with such obligations and ensuring robust measures to protect sensitive information remain critical priorities. ... Numerous international data protection laws also impact the timeshare industry, but these are the primary laws affecting American resorts. Additionally, the timeshare industry is subject to other sector-related regulations, such as the Payment Card Industry Data Security Standard (PCI DSS), which sets requirements for securing payment card information for any business that processes credit card transactions.



Quote for the day:

“When we give ourselves permission to fail, we, at the same time, give ourselves permission to excel.” -- Eloise Ristad

Daily Tech Digest - July 07, 2024

How Good Is ChatGPT at Coding, Really?

A study published in the June issue of IEEE Transactions on Software Engineering evaluated the code produced by OpenAI’s ChatGPT in terms of functionality, complexity and security. The results show that ChatGPT has an extremely broad range of success when it comes to producing functional code—with a success rate ranging anywhere from as poor as 0.66 percent to as good as 89 percent—depending on the difficulty of the task, the programming language, and a number of other factors. While in some cases the AI generator could produce better code than humans, the analysis also reveals some security concerns with AI-generated code. ... Overall, ChatGPT was fairly good at solving problems in the different coding languages—but especially when attempting to solve coding problems that existed on LeetCode before 2021. For instance, it was able to produce functional code for easy, medium, and hard problems with success rates of about 89, 71, and 40 percent, respectively. “However, when it comes to the algorithm problems after 2021, ChatGPT’s ability to generate functionally correct code is affected. It sometimes fails to understand the meaning of questions, even for easy level problems,” Tang notes.


What can devs do about code review anxiety?

A lot of folks reported that either they would completely avoid picking up code reviews, for example. So maybe someone's like, “Hey, I need a review,” and folks are like, “I'm just going to pretend I didn't see that request. Maybe somebody else will pick it up.” So just kind of completely avoiding it because this anxiety refers to not just getting your work reviewed, but also reviewing other people's work. And then folks might also procrastinate, they might just kind of put things off, or someone was like, “I always wait until Friday so I don't have to deal with it all weekend and I just push all of that until the very last minute.” So definitely you see a lot of avoidance. ... there is this misconception that only junior developers or folks just starting out experience code review anxiety, with the assumption that it's only because you're experiencing the anxiety when your work is being reviewed. But if you think about it, anytime you are a reviewer, you're essentially asked to contribute your expertise and so there is an element of, “If I mess up this review, I was the gatekeeper of this code. And if I mess it up, that might be my fault.” So there's a lot of pressure there.
 

Securing the Growing IoT Threat Landscape

What’s clear is that there should be greater collective responsibility between stakeholders to improve IoT security outlooks. A multi-stakeholder response is necessary, leading to manufacturers prioritising security from the design phase, to governments implementing legislation to mandate responsibility. Currently, some of the leading IoT issues relate to deployment problems. Alex suggests that IT teams also need to ensure default device passwords are updated and complex enough to not be easily broken. Likewise, he highlights the need for monitoring to detect malicious activity. “Software and hardware hygiene is essential, especially as IoT devices are often built on open source software, without any convenient, at scale, security hardening and update mechanisms,” he highlights. “Identifying new or known vulnerabilities and having an optimised testing and deployment loop is vital to plug gaps and prevent entry from bad actors.” A secure-by-design approach should ensure more robust protections are in place, alongside patching and regular maintenance. Alongside this, security features should be integrated from the start of the development process.


Beyond GPUs: Innatera and the quiet uprising in AI hardware

“Our neuromorphic solutions can perform computations with 500 times less energy compared to conventional approaches,” Kumar stated. “And we’re seeing pattern recognition speeds about 100 times faster than competitors.” Kumar illustrated this point with a compelling real-world application. ... Kumar envisions a future where neuromorphic chips increasingly handle AI workloads at the edge, while larger foundational models remain in the cloud. “There’s a natural complementarity,” he said. “Neuromorphics excel at fast, efficient processing of real-world sensor data, while large language models are better suited for reasoning and knowledge-intensive tasks.” “It’s not just about raw computing power,” Kumar observed. “The brain achieves remarkable feats of intelligence with a fraction of the energy our current AI systems require. That’s the promise of neuromorphic computing – AI that’s not only more capable but dramatically more efficient.” ... As AI continues to diffuse into every facet of our lives, the need for more efficient hardware solutions will only grow. Neuromorphic computing represents one of the most exciting frontiers in chip design today, with the potential to enable a new generation of intelligent devices that are both more capable and more sustainable.


Artificial intelligence in cybersecurity and privacy: A blessing or a curse?

AI helps cybersecurity and privacy professionals in many ways, enhancing their ability to protect systems, data, and users from various threats. For instance, it can analyse large volumes of data, spot anomalies, and identify suspicious patterns for threat detection, which helps to find unknown or sophisticated attacks. AI can also defend against cyber-attacks by analysing and classifying network data, detecting malware, and predicting vulnerabilities. ... The harmful effects of AI may be fewer than the positive ones, but they can have a serious impact on organisations that suffer from them. Clearly, as AI technology advances, so do the strategies for both protecting and compromising digital systems. Security professionals should not ignore the risks of AI, but rather prepare for them by using AI to enhance their capabilities and reduce their vulnerabilities. ... As attackers are increasingly leveraging AI, integrating AI defences is crucial to stay ahead in the cybersecurity game. Without it, we risk falling behind.” Consequently, cybersecurity and privacy professionals, and their organisations, should prepare for AI-driven cyber threats by adopting a multi-faceted approach to enhance their defences while minimising risks and ensuring ethical use of technology.


Intel is betting big on its upcoming Lunar Lake XPUs to change how we think of AI in our PCs

Designed with power efficiency in mind, the Lunar Lake architecture is ideal for portable devices such as laptops and notebooks. These processors balance performance and efficiency by integrating Performance Cores (P-cores) and Efficiency Cores (E-cores). This combination allows the processors to handle both demanding tasks and less intensive operations without draining the battery. The Lunar Lake processors will feature a configuration of up to eight cores, split equally between P-cores and E-cores. This design aims to improve battery life by up to 60 per cent, positioning Lunar Lake as a strong competitor to ARM-based CPUs in the laptop market. Intel anticipates that these will be the most efficient x86 processors it has ever developed. ... A major highlight of the Lunar Lake processors is the inclusion of the new Xe2 GPUs as integrated graphics. These GPUs are expected to deliver up to 80 per cent better gaming performance compared to previous generations. With up to eight second-generation Xe-cores, the Xe2 GPUs are designed to support high-resolution gaming and multimedia tasks, including handling up to three 4K displays at 60 frames per second with HDR.


Cyber Threats And The Growing Complexity Of Cybersecurity

Irvine envisions a future where the cybersecurity industry undergoes significant disruption, with a greater emphasis on data-driven risk management. “The cybersecurity industry is going to be disrupted severely. We start to think about cybersecurity more as a risk and we start to put more data and more dollars and cents around some of these analyses,” she predicted. As the industry matures, Dr. Irvine anticipates a shift towards more transparent and effective cybersecurity solutions, reducing the prevalence of smoke and mirrors in the marketplace. She also claims that “AI and LLMs will take over jobs. There will be automation, and we're going to need to upskill individuals to solve some of these hard problems. It's just a challenge for all of us to figure out how.” Kosmowski also remarked that the industry must remain on top of what will continue to be a definitive risk to organizations: “Over 86% of companies are hybrid and expect to remain hybrid for the foreseeable future, plus we know IT proliferation is continuing to happen at a pace that we have never seen before.”


The blueprint for data center success: Documentation and training

In any data center, knowledge is a priceless asset. Documenting configurations, network topologies, hardware specifications, decommissioning regulations, and other items mentioned above ensures that institutional knowledge is not lost when individuals leave the organization. So, no need to panic once the facility veteran retires, as you’ll already have all the information they have! This information becomes crucial for staff, maintenance personnel, and external consultants to understand every facet of the systems quickly and accurately. It provides a more structured learning path, facilitates a deeper understanding of the data center's infrastructure and operations, and allows facilities to keep up with critical technological advances. By creating a well-documented environment, facilities can rest assured knowing that authorized personnel are adequately trained, and vital knowledge is not lost in the shuffle, contributing to overall operational efficiency and effectiveness, and further mitigating future risks or compliance violations.


Why Knowledge Is Power in the Clash of Big Tech’s AI Titans

The advanced AI models currently under development across big tech -- models designed to drive the next class of intelligent applications -- must learn from more extensive datasets than the internet can provide. In response, some AI developers have turned to experimenting with AI-generated synthetic data, a risky proposition that could jeopardize an entire engine if even a small part of the learning model is inaccurate. Others have pivoted to content licensing deals for access to useful, albeit limited, proprietary training data. ... The real differentiating edge lies in who can develop a systemic means of achieving GenAI data validation, integrity, and reliability with a certificated or “trusted” designation, in addition to acquiring expert knowledge from trusted external data and content sources. These twin pillars of AI trust, coupled with the raw computational power of new and emerging data centers, will likely be the markers of which big tech brands gain the immediate upper hand.


Should Sustainability be a Network Issue?

The beauty of replacing existing network hardware components with energy-efficient, eco-friendly, small form factor infrastructure elements wherever possible is that no adjustments have to be made to network configurations and topology. In most cases, you're simply swapping out routers, switches, etc. The need for these equipment upgrades naturally occurs with the move to Wi-Fi 6, which requires new network switches, routers, etc., in order to run at full capacity. Hardware replacements can be performed on a phased plan that commits a portion of the annual budget each year for network hardware upgrades ... There is a need in some cases to have discrete computer networks that are dedicated to specific business functions, but there are other cases where networks can be consolidated so that resources such as storage and processing can be shared. ... Network managers aren’t professional sustainability experts—but local utility companies are. In some areas of the U.S., utility companies offer free onsite energy audits that can help identify areas of potential energy and waste reduction.



Quote for the day:

"It takes courage and maturity to know the difference between a hoping and a wishing." -- Rashida Jourdain

Daily Tech Digest - April 17, 2024

Are You Delivering on Developer Experience?

A critical concept in modern developer experience is the “inner loop” of feedback on code changes. When a developer has a quick and familiar system to get feedback on their code, it encourages multiple cycles of testing and experimentation before code is deployed to a final test environment or production. The “outer loop” of feedback involves a more formal process of proposing tests, merging changes, running integration and then end-to-end tests. When problems are found on the outer loop, the result is larger, slower deployments with developers receiving feedback hours or days after they write code. Outer loop testing can still be testing that is automated and kicked off by the original developer, but another common issue with feedback that comes later in the release cycle is that it comes from human testers or others in the release process. This often results in feedback that is symptomatic rather than identifying root causes. When feedback isn’t clear, it’s as bad or worse than unclear requirements: Developers can’t work quickly on problems they haven’t diagnosed, and they’ve often moved on to other projects in the time between deployment and finding an issue. 
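
A minimal illustration of the inner loop: a unit test the developer can run locally in seconds, long before the change reaches the slower outer-loop pipeline (the function and tests are hypothetical, pytest-style):

# pricing.py (hypothetical module under change)
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# test_pricing.py -- runs locally in seconds: the "inner loop"
def test_apply_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_bad_percent():
    import pytest
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)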


The digital tapestry: Safeguarding our future in a hyper-connected world

Data centers, acting as the computational hearts, power grids as the electrical circulatory system, and communication networks as the interconnected neural pathways – these elements form the infrastructure that facilitates the flow of information, the very essence of modern life. But like any complex biological system, they have vulnerabilities. A sophisticated cyberattack can infiltrate a data center, disrupting critical services. A natural disaster can sever communication links, isolating entire regions. These vulnerabilities highlight the paramount importance of resilience. We must design and maintain infrastructure that can withstand these disruptions, adapt to changing demands, and recover swiftly from setbacks. This intricate dance becomes even more critical as we attempt to seamlessly integrate revolutionary technologies like artificial intelligence (AI) into the fabric of our critical infrastructure. As we know, AI offers incredible potential, functioning like a highly sophisticated adaptive learning algorithm within the data center and critical infrastructure network. 


5 Strategies To Get People To Listen To You At Work

Credibility is currency at work. It is built over time, not by title or position but through displays of integrity, expertise, and knowledge. To be considered credible we need to have something valuable to say, and we can hone that by investing in continuous learning, staying abreast of industry trends, and demonstrating an ability to contribute to the success of the team through our actions and contributions. ... Tailor your message to resonate with the concerns, interests, and communication preferences of those you’re addressing. Speaking to executives, for instance, demands clarity, brevity, and alignment with strategic goals. Anticipate their probing questions about risks and opportunities and emphasize the impact on the bottom line. ... When people come to speak with you, silence your phone and computer and give them your full attention. Ask them follow-up questions, take notes, and adopt a mindset of learning. By demonstrating genuine interest and appreciation for your team members’ viewpoints, you will foster a culture of collaboration and mutual respect that encourages others to listen to you in turn.


Thinking outside the code: How the hacker mindset drives innovation

The hacker mindset has a healthy disrespect for limitations. It enjoys challenging the status quo and looking at problems with a “what if” mentality: “what if a malicious actor did this?” or “what if we look at data security from a different angle?” This pushes tech teams to think outside the code, and explore more unconventional solutions. In its essence, hacking is about creating new technologies or using existing technologies in unexpected ways. It’s about curiosity, the pursuit of knowledge, wondering “what else can this do?” I can relate this to movies like The Matrix; it’s about not accepting reality as a “read-only” situation. It’s about changing your technical reality, learning which software elements can be manipulated, changed or re-written completely. ... Curiosity is one of the most important elements to fuel growth. Organizations with a “question everything” attitude will be the first to adapt to new threats; first to seize opportunities; and last to become obsolete. For me, ideal organizations are tech-driven playgrounds that encourage experimentation and celebrate failure as progress.


SAS Viya and the pursuit of trustworthy AI

Ensuring ethical use of AI starts before a model is deployed—in fact, even before a line of code is written. A focus on ethics must be present from the time an idea is conceived and persist through the research and development process, testing, and deployment, and must include comprehensive monitoring once models are deployed. Ethics should be as essential to AI as high-quality data. It can start with educating organizations and their technology leaders about responsible AI practices. So many of the negative outcomes outlined here arise simply from a lack of awareness of the risks involved. If IT professionals regularly employed the techniques of ethical inquiry, the unintended harm that some models cause could be dramatically reduced. ... Because building a trustworthy AI model requires a robust set of training data, SAS Viya is equipped with strong data processing, preparation, integration, governance, visualization, and reporting capabilities. Product development is guided by the SAS Data Ethics Practice (DEP), a cross-functional team that coordinates efforts to promote the ideals of ethical development—including human centricity and equity—in data-driven systems. 


From skepticism to strength: The evolution of Zero Trust

The core concepts are the same. The principle of least privilege and an assume-breach mentality are still key. For example, backup management systems must be isolated on the network so that no unauthenticated users can access them. Likewise, the backup storage system itself must be isolated. Immutability is also key. Having backup data that cannot be changed or tampered with means that if repositories are reached by attacks like ransomware, they cannot be affected by its malware. Assuming a breach also means businesses should not implicitly ‘trust’ their backups after an attack. Having processes to properly validate the backup or ‘clean’ it before attempting system recovery is vital to ensure you are not simply restoring a still-compromised environment. The final layer of distrust is to have multiple copies of your backups – fail-safes in case one (or more) are compromised. The best practice is to have three copies of your backup, two stored on different media types, one stored onsite, and one kept offline. With these layers of resilience, you can start to consider your backup as Zero Trust. With Zero Trust Data Resilience, just like Zero Trust, it is a journey. You cannot implement it all at once.
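
The layered best practice described above (three copies, two media types, one onsite, one offline, all immutable) can be expressed as a simple policy check; a minimal sketch with hypothetical fields:

# Hypothetical inventory of backup copies for one workload.
backups = [
    {"location": "onsite",  "media": "disk",   "offline": False, "immutable": True},
    {"location": "offsite", "media": "object", "offline": False, "immutable": True},
    {"location": "offsite", "media": "tape",   "offline": True,  "immutable": True},
]

def meets_resilience_policy(copies: list[dict]) -> bool:
    enough_copies   = len(copies) >= 3
    two_media_types = len({c["media"] for c in copies}) >= 2
    one_onsite      = any(c["location"] == "onsite" for c in copies)
    one_offline     = any(c["offline"] for c in copies)
    all_immutable   = all(c["immutable"] for c in copies)
    return all([enough_copies, two_media_types, one_onsite, one_offline, all_immutable])

# "Assume breach": even a passing check is not implicit trust; backups are still
# validated or cleaned before any restore.
print(meets_resilience_policy(backups))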


Where in the world is your AI? Identify and secure AI across a hybrid environment

“Your AI strategy is as good as your data strategy,” says Brad Arkin, chief trust officer at Salesforce. “Organizations adopting AI must balance trust with innovation. Tactically, that means companies need to do their diligence — for example, taking the time to classify data and implement specific policies for AI use cases.” ... Threat vectors like the DNS or APIs connecting to backend or cloud-based data lakes or repositories, particularly over IoT (internet of things), constitute two major vulnerabilities to sensitive data, adds Julie Saslow Schroeder, a chief legal officer and pioneer in AI and data privacy laws and SaaS platforms. “By putting up insecure chatbots connecting to vulnerable systems, and allowing them access to your sensitive data, you could break every global privacy regulation that exists without understanding and addressing all the threat vectors.” ... Arkin says security is a shared responsibility between cloud/SaaS provider and enterprise customers, emphasizing optional detection controls like event monitoring and audit trails that help customers gain insights into who’s accessing their data, for what purpose, and the type of processing being done.


Where Are You on the Cybersecurity Readiness Index? Cisco Thinks You’re Probably Overconfident

As we noted, cybersecurity readiness is alarmingly low across the board. However, that’s not reflected in the confidence of the companies that responded to the Cisco study. Some 80% of respondents, down slightly from last year, say they’re moderately to very confident in their ability to stay resilient. Cisco believes their confidence is misplaced and that they have not assessed the scale of their challenges. I agree that such confidence will only get companies in trouble. With cybersecurity, it’s best to maintain a healthy paranoia and plan for the worst. No one thinks they’ll get in a car accident from texting on their phones until it happens. That’s when people change their behavior. There are many other revealing takeaways in this nearly 30-page report. But there’s nothing more alarming than the fact that, even after decades of having it driven home and having boardrooms and C-suites supposedly buy in, cyber threats are still taken too lightly. There are gaps in maturity, coverage, talent, and self-awareness. The underlying cause of these gaps is hard to pin down. But it likely comes from how we can all hold contradictory beliefs in our heads simultaneously. We can all freely acknowledge that cybersecurity is a significant threat.


The Global Menace of the Russian Sandworm Hacking Team

The group's ambitions have long been global: "The group’s readiness to conduct cyber operations in furtherance of the Kremlin’s wider strategic objectives globally is ingrained in its mandate." Past attacks include a 2016 hack against the Democratic National Committee, the 2017 NotPetya wave of encrypting software and the 2018 unleashing of malware known as Olympic Destroyer that disrupted the winter Olympics being held in South Korea. The group has recently turned to mobile devices and networks including a 2023 attempt to deploy malware programmed to spy on Ukrainian battlefield management apps. According to Mandiant, the group is directing and influencing the development of "hacktivist" identities in a bid to augment the psychological effects of its operations. Especially following the February 2022 invasion, Sandworm has used a series of pro-Russian Telegram channels including XakNet Team and Solntsepek to claim responsibility for hacks and leak stolen information. Sandworm also appears to have a close relationship with CyberArmyofRussia_Reborn.


How AI is Transforming Traditional Code Review Practices

The most effective use of AI in software development marries its strengths with the irreplaceable intuition, creativity, and experience of human developers. This synergistic approach leverages AI for what it does best — speed, consistency, and automation — while relying on humans for strategic decision-making and nuanced understanding that AI (currently) cannot replicate. AI can now be used to address the challenges of the traditionally human-centric process of code reviews. For example, AI can scan entire code repositories and workflow systems to understand the context in which the codebase runs. ... Future advancements will see AI evolve into the role of a collaborator, capable of more complex reasoning, offering design suggestions, best practices, and even predicting or simulating the impact of code changes on software functionality and performance. AI can provide deeper insights into code quality, offer personalized feedback, and play a key role in instilling a culture of learning and improvement within development teams.



Quote for the day:

"It is in your moments of decision that your destiny is shaped." -- Tony Robbins

Daily Tech Digest - January 16, 2024

Why Pre-Skilling, Not Reskilling, Is The Secret To Better Employment Pipelines

In a landscape where the relevance of skills evolves, Zaslavski says that organizations should focus on selecting and advancing individuals based on their potential for learning skills like critical thinking and resiliency, instead of focusing on hard skills like coding. ... “By concentrating on these fundamental elements, as opposed to current technical proficiency or past work history, organizations position themselves with an agile and future-ready workforce. In this light, pre-skilling should be an integral part of employers’ talent strategy pre and post-hiring, from sourcing and recruiting to career pathing and employee engagement.” ... She points to areas like understanding if a potential or existing employee has the EQ and social skills needed to perform as part of a group. Or whether they have the curiosity and analytical intelligence needed to learn new hard skills as well as the ambition and work ethic to achieve results. “When people have learning ability, drive, and people skills, they will probably develop new skills faster than others,” she says.


Agile is a concept we all continuously talk about, but what is it really?

Empiricism, teams, user stories, iterations; they are all examples of tools that we use in Agile, but they are not its purpose. Agile is about empowering people to take control of their environment and give them complete freedom to discover how to use available tools in the most effective way. And this applies to the why too. People adopt Agile to increase efficiency, transparency, velocity, predictability, quality. But again all these are a result of Agile, not its goal. It is the mindset that makes it all possible. That is why it is “People and interactions above processes and tools”. To illustrate this, think about empiricism itself. Try introducing empiricism into an organisation mired in a culture of fear and control, and it doesn’t work, no matter what you do. You can’t force empiricism. People are too busy evading blame and manipulating information. Think about it, how often do people complain that the retrospective doesn’t deliver anything? Retrospectives where people just complain and nothing changes? 


What Will It Take to Adopt Secure by Design Principles?

What does the future of secure by design adoption look like? CISA is continuing its work alongside industry partners. “Part of our strategy is to collect data on attacks and understand what that data is telling us about risk and impact and derive further best practices and work with companies, and really other nations, to adopt these principles,” Zabierek shares. International collaboration on secure by design is reflected not only in this CISA initiative but also the Guidelines for Secure AI System Development. CISA and the UK’s National Cyber Security Centre (NCSC) led the development of those guidelines, and 16 other countries have agreed to them. But like the Secure by Design initiative, this framework is also non-binding. A software manufacturer’s timeline for adopting secure by design principles will depend on its appetite, resources and the complexity of its products. But the more demand from government and consumers, the more likely adoption will happen. Right now, CISA has no plans to track adoption. “We're more focused on collaborating with industry so that we can understand best practices and recommend further better guidelines,” says Zabierek.


Mastering the art of motivation

Once you’ve helped employees connect their dots, the best way to further motivate them is also the cheapest, easiest, and has the fewest unintended consequences. Compliment them on a job well done, whenever they’ve done a job well enough to be worth noting. Sure, there are wrong ways to use compliments as motivators. First and foremost the employee you’re complimenting must value your opinion. If they don’t they’ll write off your compliment as just so much noise. Second, a compliment from you should not be an easy compliment to earn. “I really like your belt,” isn’t going to inspire someone to work inventively and late. Third, with few exceptions compliments should be public. There’s little reason for you to be embarrassed about being pleased with someone’s efforts. With one caveat: Usually you’ll have one or two in your organization who routinely perform exceptionally well, but also one or two who are plodders — good enough and steady enough to keep around; not good enough or steady enough to earn your praise. Find a way to compliment them in public anyway — perhaps because you prize their reliability and lack of temperament.


Do you need GPUs for generative AI systems?

GPUs greatly enhance performance, but they do so at a significant cost. Also, for those of you tracking carbon points, GPUs consume notable amounts of electricity and generate considerable heat. Do the performance gains justify the cost? CPUs are the most common type of processors in computers. They are everywhere, including in whatever you’re using to read this article. CPUs can perform a wide variety of tasks, and they have a smaller number of cores compared to GPUs. However, they have sophisticated control units and can execute a wide range of instructions. This versatility means they can handle AI workloads, such as use cases that need to leverage any kind of AI, including generative AI. CPUs can prototype new neural network architectures or test algorithms. They can be adequate for running smaller or less complex models. This is what many businesses are building right now (and will be for some time) and CPUs are sufficient for the use cases I’m currently hearing about. CPUs are more cost-effective in terms of initial investment and power consumption for smaller organizations or individuals who have limited resources. 


How to create an AI team and train your other workers

Building a genAI team requires a holistic approach, according to Jayaprakash Nair, head of Machine Learning, AI and Visualization at Altimetrik, a digital engineering services provider. To reduce the risk of failure, organizations should begin by setting the foundation for quality data, establishing “a single source of truth strategy,” and defining business objectives. Building a team that includes diverse roles such as data scientists, machine learning engineers, data engineers, domain experts, project managers, and ethicists/legal advisors is also critical, he said. “Each role will contribute unique expertise and perspectives, which is essential for effective and responsible implementation,” Nair said. "Management must work to foster collaboration among these roles, help align each function with business goals, and also incorporate ethical and legal guidance to ensure that projects adhere to industry guidelines and regulations." ... It's also important to look for people who like learning new technology, have a good business sense, and understand how the technology can benefit the company.


Data is the missing piece of the AI puzzle. Here's how to fill the gap

Companies looking to make progress in AI, says Labovich, must "strike a balance and acknowledge the significant role of unstructured data in the advancement of gen AI." Sharma agrees with these sentiments: "It is not necessarily true that organizations must use gen AI on top of structured data to solve highly complex problems. Oftentimes the simplest applications can lead to the greatest savings in terms of efficiency." The wide variety of data that AI requires can be a vexing piece of the puzzle. For example, data at the edge is becoming a major source for large language models and repositories. "There will be significant growth of data at the edge as AI continues to evolve and organizations continue to innovate around their digital transformation to grow revenue and profits," says Bruce Kornfeld, chief marketing and product officer at StorMagic. Currently, he continues, "there is too much data in too many different formats, which is causing an influx of internal strife as companies struggle to determine what is business-critical versus what can be archived or removed from their data sets."


3 ways to combat rising OAuth SaaS attacks

At their core, OAuth integrations are cloud apps that can access data on behalf of a user, with a defined permission set. When a Microsoft 365 user installs a MailMerge app to their Word, for example, they have essentially created a service principal for the app and granted it an extensive permission set with read/write access, the ability to save and delete files, as well as the ability to access multiple documents to facilitate the mail merge. The organization needs to implement an application control process for OAuth apps and determine if the application, like in the example above, is approved or not. ... Security teams should view user security through two separate lenses. The first is the way they access the applications. Apps should be configured to require multi-factor authentication (MFA) and single sign-on (SSO). ... Automated tools should scan the logs and report whenever an OAuth-integrated application is acting suspiciously. For example, applications that display unusual access patterns or geographical abnormalities should be regarded as suspicious. 
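
A toy version of the third recommendation, automatically scanning logs for suspicious OAuth app behavior, might look something like this (the log format, app names, countries, and thresholds are invented for illustration):

from collections import defaultdict

# Hypothetical OAuth access log entries for integrated apps.
events = [
    {"app": "MailMergeApp", "action": "read",   "country": "US"},
    {"app": "MailMergeApp", "action": "delete", "country": "US"},
    {"app": "MailMergeApp", "action": "delete", "country": "KP"},  # geographic anomaly
    {"app": "CalendarSync", "action": "read",   "country": "US"},
]

EXPECTED_COUNTRIES = {"US"}
SENSITIVE_ACTIONS = {"delete", "export"}

def flag_suspicious(log: list[dict]) -> dict[str, list[str]]:
    findings = defaultdict(list)
    for e in log:
        if e["country"] not in EXPECTED_COUNTRIES:
            findings[e["app"]].append(f"access from unexpected country: {e['country']}")
        if e["action"] in SENSITIVE_ACTIONS:
            findings[e["app"]].append(f"sensitive action: {e['action']}")
    return dict(findings)

for app, reasons in flag_suspicious(events).items():
    print(app, "->", reasons)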


Cloud cost optimisation: Strategies for managing cloud expenses and maximising ROI

Instead of employing manual resources, streamlining cloud optimisation through automation could bring enhanced resource savings to the table. The auto-scaling program offered by Amazon Web Services (AWS) is a shining example of how firms can effectively streamline their cloud optimisation in a short time. The program also enables swift optimisation in response to the changing resource requirements of systems and servers. ... At the planning stage, firms need to justify the cloud budget and ensure that unexpected spending is reduced to the minimum. The same approach has to be followed in the building, deployment, and control phases so that any unexpected rise in budgets can be adjusted promptly without throwing the entire financial control into a tizzy. All these steps will help organisations develop a culture of cost-conscious cloud adoption and help them perform optimally while keeping costs in check. ... Incorporating cloud cost optimisation tools is a strategic approach for organisations to streamline expenditures and enhance ROI. 
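
The auto-scaling idea reduces to a simple control loop; a toy sketch of the decision logic (the thresholds and limits are illustrative assumptions, not AWS's actual policy API):

def desired_capacity(current_instances: int, cpu_utilization: float,
                     scale_up_at: float = 70.0, scale_down_at: float = 30.0,
                     min_instances: int = 2, max_instances: int = 20) -> int:
    """Return the instance count a scaling policy would target."""
    if cpu_utilization > scale_up_at:
        target = current_instances + 1          # add capacity under load
    elif cpu_utilization < scale_down_at:
        target = current_instances - 1          # shed idle capacity to cut cost
    else:
        target = current_instances
    return max(min_instances, min(max_instances, target))

print(desired_capacity(4, 82.0))  # -> 5
print(desired_capacity(4, 12.0))  # -> 3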


Pull Requests and Tech Debt

The biggest disadvantage of pull requests is understanding the context of the change, technical or business context: you see what has changed without necessarily explaining why the change occurred. Almost universally, engineers review pull requests in the browser and do their best to understand what’s happening, relying on their understanding of tech stack, architecture, business domains, etc. While some have the background necessary to mentally grasp the overall impact of the change, for others, it’s guesswork, assumptions, and leaps of faith….which only gets worse as the complexity and size of the pull request increases. [Recently a friend said he reviewed all pull requests in his IDE, greatly surprising me: first I’ve heard of such diligence. While noble, that thoroughness becomes a substantial time commitment unless that’s your primary responsibility. Only when absolutely necessary do I do this. Not sure how he pulls it off!] Other than those good samaritans, mostly what you’re doing is static code analysis: within the change in front of you, what has changed, and does it make sense? You can look for similar changes, emerging patterns that might drive refactoring, best practices, or others doing similar.



Quote for the day:

"All leadership takes place through the communication of ideas to the minds of others." -- Charles Cooley

Daily Tech Digest - June 14, 2023

Malicious hackers are weaponizing generative AI

The headline here is not that this new threat exists; it was only a matter of time before threats powered by generative AI showed up. There must be some better ways to fight these types of threats that are likely to become more common as bad actors learn to leverage generative AI as an effective weapon. If we hope to stay ahead, we will need to use generative AI as a defensive mechanism. This means a shift from being reactive (the typical enterprise approach today) to being proactive, using tactics such as observability and AI-powered security systems. The challenge is that cloud security and devsecops pros must step up their game in order to keep out of the 24-hour news cycles. This means increasing investments in security at a time when many IT budgets are being downsized. If there is no active response to managing these emerging risks, you may have to price in the cost and impact of a significant breach, because you’re likely to experience one. Of course, it’s the job of security pros to scare you into spending more on security or else the worst will likely happen.


Avoiding the Pain of a ‘Resume-Driven Architecture’

A resume-driven architecture occurs when the interests of developers lead them to designs that no longer align with maximized impacts and outcomes for the organization. Often, the developer clings to a technology that provides them a greater level of control and, at least initially, a higher salary. Meanwhile, the organization gets an architecture that only a handful of people know how to manage and maintain, limiting the available talent pool and hindering future innovation. ... There’s no sense in investing resources in a bespoke architecture if it’s not providing you with any differentiation—especially when competitors are achieving the same outcome with fewer resources. Moreover, getting stuck in a Stage Two mindset when the field moves on to Stage Three (or, worse, Stage Four) and cuts you off from the next wave of innovation. Subsequent technology breakthroughs often build on top of—and interoperate with—the previous technology layers. If you’re stuck with a custom architecture when the industry has moved on, you can miss out on the next wave of innovation and fall further behind competitors.


In the Great Microservices Debate, Value Eats Size for Lunch

A key criterion for a service to be standing alone as a separate code base and a separately deployable entity is that it should provide some value to the users — ideally the end users of the application. A useful heuristic to determine whether or not a service satisfies this criterion is to think about whether most enhancements to the service would result in benefits perceivable by the user. If in a vast majority of updates the service can only provide such user benefit by having to also get other services to release enhancements, then the service has failed the criterion. ... Providing value is also about the cost efficiency of designing as multiple services versus combining as a single service. One such aspect that was highlighted in the Prime Video case was chatty network calls. This could be a double whammy because it not only results in additional latency before a response goes back to the user, but it might also increase your bandwidth costs. This would be more problematic if you have large or several payloads moving around between services across network boundaries. 
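
The cost of chatty network calls is easy to see with a back-of-the-envelope sketch (the latency and payload figures are purely illustrative):

def request_cost(num_calls: int, latency_ms_per_call: float, payload_kb_per_call: float):
    """Latency and bandwidth for one user request that fans out into sequential service calls."""
    return num_calls * latency_ms_per_call, num_calls * payload_kb_per_call

# One user action handled by a single combined service vs. five chatty hops.
combined = request_cost(1, 20.0, 50.0)
chatty   = request_cost(5, 20.0, 50.0)

print(f"combined: {combined[0]:.0f} ms, {combined[1]:.0f} KB")  # 20 ms, 50 KB
print(f"chatty:   {chatty[0]:.0f} ms, {chatty[1]:.0f} KB")      # 100 ms, 250 KB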


Enhancing Code Reviews with Conventional Comments

In software development, code reviews are a vital practice that ensures code quality, promotes consistency, and fosters knowledge sharing. Yet, at times, they can drive me absolutely bananas! However, the effectiveness of code reviews is contingent on clear, concise communication. This is where Conventional Comments play a pivotal role. Conventional Comments provide a standardized method of delivering and receiving feedback during code reviews, reducing misunderstandings and promoting more efficient discussions. Conventional Comments are a structured commenting system for code reviews and other forms of technical dialogue. They establish a set of predefined labels, such as nitpick, issue, suggestion, praise, question, thought, and notably, non-blocking. Each label corresponds to a specific comment type and expected response. ... By standardizing labels and formats, Conventional Comments enhance the clarity of comments, eliminating vague language and misunderstandings, ensuring all participants understand the intent and meaning of the comments.
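
For concreteness, comments in this style follow a "label (decorations): subject" pattern; a few illustrative examples:

praise: Nice use of a lookup table here, this removes a lot of branching.
nitpick (non-blocking): Consider renaming `tmp` to something more descriptive.
suggestion (non-blocking): Extracting this validation into a helper would let the billing module reuse it.
issue (blocking): This query runs inside the loop, so it will hit the database once per item.
question: Is the retry here intentional, or left over from debugging?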


How the modern CIO grapples with legacy IT

When reviewing products and services, Abernathy considers whether a technology still fits into requirements for simplicity of geographies, designs, platforms, applications, and equipment. “Driving for simplicity is of paramount importance because it increases quality, stability, value, agility, talent engagement and security,” she says. Other red flags for replacement include point solutions, duplicative solutions, or technologies that become very challenging because of unreasonable pricing models, inadequate support or instability. In some ways, moving to SaaS-based applications makes the review process simpler because decisions as to whether and when to update and refactor are up to the provider, Ivy-Rosser says. But while technology change decisions are the responsibility of the provider, if you’re modernizing in a hybrid world, you need to make sure your data is ready to move and that any changes don’t create privacy issues. With SaaS, the review should take a hard look at the issues surrounding ownership and control.


The psychological impact of phishing attacks on your employees

The aftermath of a successful phishing attack can be emotionally draining, leaving people feeling embarrassed and ashamed. The fear of accidentally clicking a phishing email can affect a person’s performance and productivity at work. Even simulated phishing attacks can cause stress when employees are lured with fake promises of bonuses or freebies. Furthermore, when phishing emails repeatedly get through security measures and are not neutralized, employees may view these as safe and click on them. This could ultimately lead to employees losing faith in their employer’s ability to protect them. ... Organizations owe it to their employees to be proactive. To ensure employees are protected, they should implement advanced technology that uses Artificial Intelligence and Machine Learning models, such as Natural Language Processing (NLP) and Natural Language Understanding. These tools can detect even the most advanced phishing attempts and will serve as a safety net.


Cyber liability insurance vs. data breach insurance: What's the difference?

Understanding the distinction is important, as cyber insurance is becoming an integral part of the security landscape. Many companies may have no choice but to find insurance as more organizations are requiring that their business partners have cyber coverage. Many traditional business insurance policies will simply not cover cyber incidents, considering them outside the scope of the agreement, which is why cyber insurance has become a separate form of protection. It’s also important to note that getting insurance isn’t guaranteed — insurers are increasingly asking for more proof that strong cybersecurity strategies are in place before agreeing to provide coverage. Many companies may have no choice but to meet such terms. Put simply, cyber liability insurance refers to coverage for third-party claims asserted against a company stemming from a network security event or data breach. Data breach insurance, on the other hand, refers to coverage for first-party losses incurred by the insured organization that has suffered a loss of data.


These leaders recognize that transformation investments remain critical to any business, and they plan to emerge from these volatile times armed with new business models and revenue streams. In short, they plan to continue winning through transformation, and they are laser-focused about how they will do it. You might even say they’re “outcomes obsessed.” ... Remember, your goal is to prune the tree so it can thrive—not just to go around sawing off branches. Any cuts must set up individuals, teams, and departments for long-term success, despite the short-term pain. One way I’ve seen successful leaders do this is by taking the choices they are considering (both cutting investments and expanding them) and mapping them out in terms of their expected financial and nonfinancial impact ... Top-performing companies look beyond functional excellence, and instead aim for enterprise-level reinvention that extends across the company’s business, operating, and technology models. You should too. These transformations enable you to strengthen ecosystems, close capability gaps, and better chart your future revenue streams. 


Don't Let Age Mothball Your IT Career

Age discrimination is a significant concern in the IT industry, Schneer says. “Some companies may prioritize younger workers who are perceived to be more tech-savvy and adaptable,” she notes. “However, experienced professionals bring valuable skills and knowledge that can be an asset to any organization.” Weitzel observes that it's difficult to know how prevalent age discrimination is in any industry. “But applicants can be proactive in combatting any false assumptions by showcasing upfront the current skills and recent experience that employers are seeking.” Age discrimination may be more prevalent in certain IT fields, such as software development or web design, where rapid advancements in technology can make older professionals feel less relevant, Schneer says. “However, roles that require extensive experience and expertise, such as IT management or cybersecurity, may be less susceptible to age bias.” When encountering suspected age bias, senior IT workers should document any incidents or patterns of behavior that suggest discrimination, Schneer advises.


Thinking Deductively to Understand Complex Software Systems

The main goal is to think through the role of tests in helping you understand complex code, especially in cases where you are starting from a position of unfamiliarity with the code base. I think most of us would agree that tests allow us to automate the process of answering a question like "Is my software working right now?". Since the need to answer this question comes up all the time, at least as frequently as you deploy, it makes sense to spend time automating the process of answering it. However, even a large test suite can be a poor proxy for this question, since it can only ever really answer the question "Do all my tests pass?". Fortunately, tests can be useful in helping us answer a larger range of questions. In some cases they allow us to dynamically analyse code, enabling us to glean a genuine understanding of how complex systems operate, an understanding that might otherwise be hard won.
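One concrete form this takes is the characterization (or "learning") test: a test written not to assert a specification but to record how unfamiliar code actually behaves. The sketch below is a hypothetical illustration; `parse_discount` and its expected outputs are invented stand-ins for whatever routine you are exploring.

```python
# Characterization ("learning") tests: using the test framework to probe how
# unfamiliar code behaves, rather than to assert a pre-written specification.
# `parse_discount` is a hypothetical stand-in for a legacy routine you are
# trying to understand; in practice it would live in the existing codebase.
import unittest


def parse_discount(text):
    """Hypothetical legacy routine whose behaviour we want to pin down."""
    text = text.strip()
    if text.endswith("%"):
        try:
            return float(text[:-1]) / 100
        except ValueError:
            return None
    return None


class LearnParseDiscount(unittest.TestCase):
    def test_plain_percentage(self):
        # First guess at the behaviour: run it, observe the real output,
        # then pin the observed value so future changes become visible.
        self.assertEqual(parse_discount("10%"), 0.10)

    def test_unexpected_input(self):
        # Does it raise, return None, or silently default? Once the test
        # records the answer, that knowledge is documented and repeatable.
        self.assertIsNone(parse_discount("ten percent"))


if __name__ == "__main__":
    unittest.main()
```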



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - November 15, 2022

The Chief Trust Officer Role Can Be the Next Career Step for CISOs

Many CISOs are already unofficially doing the work that comes with the CTrO role, according to Pollard. They are doing customer-facing work, navigating third-party risk management, and focusing on enterprise resilience. “CISOs that spend more time on customer-facing activity, they are at companies that grow faster,” Pollard asserted. “Cybersecurity touches revenue, and security leaders that are able to carve out the time to focus on customer activity help drive hyper growth.” CISOs who are driving growth for their companies are playing an important part on the leadership team, and if they’ve been in the role for a long enough time, it could be time to ask the question “What comes next?” CISOs who have been in their position for 48 months are due for a title-level promotion, according to Pollard. And CTrO is that next step. ... Through his research, Pollard is seeing the CTrO role filled at a number of organizations. Cisco has a chief trust officer. So does SAP. “We're not talking about small, innovative startups. We're talking about goliath businesses that recognize the importance of trust in what they do,” Pollard said.


How regulation of the metaverse could impact your business

The regulatory challenges faced by Web3 are currently much fresher, arguably more nuanced and in some cases, urgent. It cannot be regulated as a single entity, as its multitude of use cases demand a multitude of approaches. Specific rules governing the security and availability of systems, finance, archives, identity and IP rights will need to be set. The good news is that policymakers could leverage Web3’s benefits to impose regulation. As it’s based on decentralisation and automation, it’s not far-fetched to imagine the technology being used to enforce and automate taxation, for example. Currently, Web3 platforms like cryptocurrency exchanges or NFT marketplaces aren’t standardised, with inconsistent UX and language used to communicate concepts. Often, these platforms have little or no duty to educate about safety or establish protections, and while platforms like Coinbase and OpenSea do a good job here, it’s far from the norm and scams are still commonplace owing to lack of understanding.


Private 5G drives sustainable and agile industrial operations

Looking at business outcomes such as sustainability and agility, the partners regard industrial private 5G as an enabler of digital transformation in smart manufacturing, helping to deliver connected worker applications, mobile asset applications and untethered fixed industrial asset applications. Connected worker applications are seen as able to increase visibility and intelligence through mobile digital tools, such as analytics, digital twins and augmented reality (AR), while mobile asset applications increase agility and efficiency with autonomous vehicles, such as automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). The consortium’s tests were run according to an established test plan provided by Rockwell Automation, with a success criterion of zero faults. It outlined a series of test cases to establish reliable EtherNet/IP standard and safety (CIP Safety) I/O connections from a GuardLogix area controller, with a range of requested packet interval (RPI) settings – the rate at which the controller and the I/O exchange data – over the 5G RAN to the FLEX 5000 standard and safety I/O.
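The consortium's test harness isn't published in the article, but as a rough, purely hypothetical sketch of how captured I/O timings might be checked against an RPI budget with a zero-fault criterion, consider the following; the 4x timeout multiplier, RPI values, and latency samples are all assumptions made for illustration.

```python
# Hypothetical sketch: verify that observed packet intervals over the 5G RAN
# stay within the connection timeout for each requested packet interval (RPI).
# The timeout multiplier, RPI values, and samples are assumptions for
# illustration, not figures from the consortium's actual test plan.

TIMEOUT_MULTIPLIER = 4  # assumed: connection faults after ~4 missed intervals

def faults_for_rpi(rpi_ms: float, observed_intervals_ms: list[float]) -> int:
    """Count intervals that exceed the assumed connection timeout."""
    timeout_ms = rpi_ms * TIMEOUT_MULTIPLIER
    return sum(1 for gap in observed_intervals_ms if gap > timeout_ms)

# Invented capture data: gaps between consecutive I/O packets, in milliseconds.
captures = {
    20.0: [19.8, 20.4, 21.1, 20.0, 22.3],
    50.0: [49.5, 51.2, 50.3, 48.9, 50.7],
}

for rpi, gaps in captures.items():
    faults = faults_for_rpi(rpi, gaps)
    verdict = "PASS" if faults == 0 else "FAIL"
    print(f"RPI {rpi} ms: {faults} fault(s) -> {verdict}")
```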


Who Moved My Code? An Anatomy of Code Obfuscation

The best security experts will tell you that there’s never an easy or single solution for protecting your intellectual property; combined measures, protection layers and methods are always required to establish a good protective shield. In this article, we focus on one small layer in source code protection: code obfuscation. Though it’s a powerful security method, obfuscation is often neglected, or at least misunderstood. When we obfuscate, our code becomes unintelligible, preventing unauthorized parties from easily decompiling or disassembling it. Obfuscation makes our code impossible (or nearly impossible) for humans to read or parse. It is therefore a good safeguarding measure for preserving the proprietary nature of the source code and protecting our intellectual property. To better explain the concept of obfuscation, let’s take “Where’s Waldo” as an example. Waldo is a well-known illustrated character, always wearing his red and white stripy shirt and hat, as well as black-framed glasses.
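As a small illustration of the readability cost obfuscation imposes (not a depiction of any particular tool), the sketch below contrasts a readable routine with a crude, hand-obfuscated equivalent; real obfuscators go much further, with control-flow flattening, string encryption, and symbol stripping.

```python
# Illustrative only: a readable function and a crude, hand-obfuscated
# equivalent. Real obfuscation tools also flatten control flow, encrypt
# strings, and strip symbols; this sketch just shows the readability cost.

# Original, readable version.
def calculate_invoice_total(prices, tax_rate):
    subtotal = sum(prices)
    return subtotal + subtotal * tax_rate

# Obfuscated version: meaningless names and an indirect construction of the
# same arithmetic make intent much harder to recover from the source.
def _0xf3(a, b):
    c = sum(a)
    return c * (1 + b)

# Both compute the same result; only the readability differs.
assert calculate_invoice_total([10.0, 6.0], 0.25) == _0xf3([10.0, 6.0], 0.25)
```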


Should security systems be the network?

The appeal and real benefits of having the security systems be the whole network are clearest for smaller and midsized companies. They are more likely to have uniform and relatively simple needs, and also to have thinner staffing. They are more likely to have difficulty affording, attracting, and retaining the talent they need in both security and networking. So having just one platform to become expert in, to train new staff on, or to outsource the management of lets them make the most of the staff they have. The benefits are less clear for larger companies. These tend to have more complex environments and requirements, and are less likely to tolerate the risks of monoculture given they are better able to staff for and support a blended ecosystem. So, should security systems be the network? For smaller organizations, it looks viable with the caveats outlined above. For most larger organizations, I think the answer is currently no. Instead, they should focus on making their network systems a bigger part of the security infrastructure.


Democratization Is The Key To Upskill At Work And Improve ROI

Creating actionable data and analytics programs to educate employees is one of the most effective ways to bridge the skills gap. We have seen successes with executive-sponsored datathons or when companies gamify their learning experience. We also think it’s important for technical data experts to act as mentors to knowledge workers with domain expertise and guide them through the analytics process. We believe this collaboration between technical experts and domain experts will help organizations achieve breakthroughs with their data faster. Finally, analytics needs to be easy, not complex. Organizations should invest in technologies that move away from being highly dependent on writing code. ... Data and analytics generate ROI in many ways. First are the time savings. Organizations that shift from spreadsheet-based processes save several hours per week, sometimes up to a third of their time per worker – multiply this by all the domain experts and knowledge workers still stuck in spreadsheets and you’ve got some serious time savings. This is just the tip of the iceberg.


Top cybersecurity threats for 2023

Disgruntled employees can sabotage networks or make off with intellectual property and proprietary information, and employees who practice poor security habits can inadvertently share passwords and leave equipment unprotected. This is why there has been an uptick in the number of companies that use social engineering audits to check how well employee security policies and procedures are working. In 2023, social engineering audits will continue to be used so IT can check the robustness of its workforce security policies and practices. ... Cases of data poisoning in AI systems have started to appear. In a data poisoning attack, a malicious actor finds a way to inject corrupted data into an AI system, skewing the results of an AI inquiry and potentially returning a false result to company decision makers. Data poisoning is a new attack vector into corporate systems. One way to protect against it is to continuously monitor your AI results: if a system suddenly trends significantly away from what it has revealed in the past, it’s time to look at the integrity of the data.
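The article stops at "continuously monitor your AI results"; a minimal, hypothetical sketch of one way to do that is below, comparing the mean of recent model outputs against a stored baseline and flagging large shifts for a data-integrity review. The baseline figures, window, and threshold are invented.

```python
# Minimal sketch of output-drift monitoring as a data-poisoning tripwire.
# Baseline statistics, window, and threshold are hypothetical; a real
# deployment would also track input distributions and training-data lineage.
from statistics import mean

# Baseline established when the model was known to be healthy.
BASELINE_MEAN = 0.31   # e.g. historical average approval score
BASELINE_STDEV = 0.05

DRIFT_THRESHOLD = 3.0  # flag if the recent mean drifts > 3 baseline st. devs.

def check_for_drift(recent_scores: list[float]) -> bool:
    """Return True if recent model outputs have drifted from the baseline."""
    drift = abs(mean(recent_scores) - BASELINE_MEAN) / BASELINE_STDEV
    return drift > DRIFT_THRESHOLD

# Invented example: scores from the last monitoring window.
window = [0.52, 0.49, 0.55, 0.47, 0.51]
if check_for_drift(window):
    print("Alert: model outputs drifted from baseline; audit training data.")
else:
    print("Model outputs within expected range.")
```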


Corporate execs confident on sustainability goals, admit more work needed

Efforts to achieve sustainability goals can broadly be grouped into several areas: green resources procurement, which includes sustainable energy and water; operational efficiency, which includes the IT value chain, supply chain and other scope 3 emission sources that make up 40% of all greenhouse gas emissions; and end of lifecycle, including circular economy or recycling products to create new ones. For example, data centers and cloud industries tend to focus on green energy procurement (since they use a lot of energy to power data centers) as well as operational efficiency to reduce power usage, according to Abhijit Sunil, a senior analyst with Forrester Research. “Standards are certainly evolving, and more and more organizations are held accountable for their commitments and how they take action towards it,” Sunil said. For example, Sunil noted, government scrutiny will continue to increase, holding more “greenwashers” accountable. Greenwashers are companies that deceptively purport that their products, aims and policies are environmentally friendly.


The office of 2023: Top workforce trends that will shape the year ahead

Roderick believes an overarching theme for the workplace in 2023 will be adjusting how employees work remotely. He says there could be an uptick in surveillance for remote workers that will allow managers to observe productivity, and executives could enforce return-to-office mandates as a reaction to a slowdown in business. ... "The world of work has been through huge changes since the pandemic, and it would be good not to see the positives of this change undone by a recession." Silverglate believes that technology, office redesign, and sustainability will all propel hybrid and remote working in 2023. Video conferencing became a staple in work-from-home practices, but VR is emerging to make the experience more immersive and productive. "When many are in person and a team member needs to be virtual, VR technology can truly reduce the perceived gap between the two, which is one of the largest complaints I've heard about the challenges of traditional video-conferencing technology as it relates to hybrid teams," he says.


From Async Code Reviews to Co-Creation Patterns

The way it goes is that once a developer thinks they are done with coding, they invite other team members to review their work. Nowadays this is typically done by raising a Pull Request and inviting others to review it. But because reviewers are busy with their own work items and a plethora of other things happening in the team, they are not able to react immediately. So, while waiting for a review, the author wants to stay productive and starts working on something else instead of twiddling their thumbs. Eventually, when the reviewer(s) become available and provide feedback on the PR and/or ask for changes, the author of the PR is not available because they are busy with something else. This delayed ping-pong communication can extend over several days or weeks and a couple of iterations, until the author and reviewer(s) converge on a solution they are both satisfied with and which gets merged into the main branch.



Quote for the day:

"How was your day? If your answer was "fine," then I don't think you were leading" -- Seth Godin