Daily Tech Digest - April 05, 2022

Want to build a relationship with your CIO? 5 things you shouldn't do

Increasingly, CIOs are tasked with the adoption and orchestration of many technology platforms that influence digital transformation and enable the business strategy. This adoption involves a delicate balance of ensuring adequate guardrails exist through technology and security governance, while at the same time striving for speed and agility. When an employee develops a reputation for being a blocker, it starts to create an adversarial relationship between the business and IT. That often leads to shadow IT, in which employees start going around the IT or information security departments to get things done. Certainly, IT and information security need to have a strong voice in highlighting risks or technical barriers, but it’s equally important that we do our best to be solution-oriented, finding creative ways to make technology work for the business and implementing it securely. When an employee becomes a chronic blocker to business objectives and every issue becomes an immovable object, it undermines the trust and collaboration that the CIO is working to promote with the business.


In search of elegance

Unfortunately, many respond to complexity by increasing complication—and thus increasing inelegance—through extensive bureaucracy with a rule for every contingency. Instead, leaders would be wise to pursue elegant simplicity: the fewest rules possible or, even better, a few rock-solid principles. This enables and empowers individuals and teams to quickly respond to the dynamism inherent in complexity. For example, look at Netflix’s five-word expense policy—“Act in Netflix’s best interest”—or Metro Bank’s customer service principle that requires only one person to say “yes” to a customer request, but two to say “no.” Authors Marc Effron and Miriam Ort, in their book, One Page Talent Management: Eliminating Complexity, Adding Value, argue, “Simplicity plays to basic human desires and cognitive processes. We crave it.” Insisting on simplicity rewards concise, coherent thinking and action; elegance recognizes and works with our core humanity. How better to engage and energize your workers? A fundamental challenge for leaders today is to reset and refocus their organizations to move with hope and confidence into an uncertain future.


Utilizing biological algorithms to detect cyber attacks

A standard approach to addressing spoofed domains is to compare them to a database of known domains and to look for differences. When an email arrives, the cybersecurity solution counts the number of changes between the attacker’s signature and each instance in the known domain database. If there are only a few changes, the domain is deemed suspicious. Measuring the number of changes between two sequences in this traditional way is done via the Levenshtein distance. While this technique works in some instances – such as when it detects a spoofed domain like m1crosoft – it struggles to identify more significant obfuscations such as MlCR0S0FT (with an “L” in place of an “I” and zeros in place of the letter “O”). The Levenshtein distance metric also finds it challenging to distinguish between microsoft-support and a microsoft domain. Since the traditional method is sometimes insufficient in detecting phishing scams, researchers have turned to nature and to a method called biomimicry.
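
To make the comparison concrete, here is a minimal sketch of the standard dynamic-programming Levenshtein distance (my own illustration, not code from the article), applied to the spoofed domains mentioned above.

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions or
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = curr
    return prev[-1]

# A close imitation scores low, so it stands out as suspicious next to the real domain...
print(levenshtein("microsoft", "m1crosoft"))          # 1
# ...but heavier obfuscation or an added word pushes the distance up,
# which is why the metric alone can miss cases like these.
print(levenshtein("microsoft", "mlcr0s0ft"))          # 3
print(levenshtein("microsoft", "microsoft-support"))  # 8
```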


The Machines Are Coming: Financial Services Can Reduce the Blast Surface with Zero Trust

While the financial services industry has always been an attractive target for hackers, the impact of how work has changed during COVID-19 has raised the stakes even higher. Research done with UK-based IT and security professionals points out that most believe COVID-induced work-from-home practices and remote work are accelerating attack risks in the financial services industry. I’m sure no one was surprised by these revelations, given the attractiveness of financial services data, such as customer records and personally identifiable information…let alone the ability to actually steal money and other financial assets. Many of us also know that cyber thieves are using “machines” to do their dirty work, such as automated attack tools, as well as artificial intelligence and machine learning algorithms. Another challenge is that our industry has an increased use of what I call “ephemeral computing,” such as cloud services and on demand technology services. While cloud is arguably more secure than any single organization’s data center, misconfigurations and oversight can leave an organization’s crown jewel data exposed in public, as we’ve seen with an increased number of highly public stories.


Using Patterns to Drive a Transformation towards Agility

Looking at stories like the one outlined before, a pattern becomes visible: adaptability is mainly about organisational capabilities like situational awareness, clear alignment and focus on goals, and the ability to react fast to changes, to learn and improve, and to deliver customer value constantly. Practically, there are many ways this can be accomplished. There is not “the one” blueprint that fits all organisations because much depends on the business, the environment, the evolution, and, last but not least, the culture. This key insight triggered the idea to work on the travel guide for growing an adaptive organisation: to give guidance, inspiration, orientation and ideas for experimenting in your concrete context so that you can find out what works for you - every transformation journey is different in the end. The idea of the travel guide is that while we cannot give people a recipe for doing an agile transformation, we can share the transformation journeys we have lived through and show emerging success patterns that can guide others in their journey.


An Introduction to Bluetooth Mesh Networking

Since the nodes in a mesh can act as repeaters, the range of the network can be extended beyond that of a single radio. Due to these advantages, wireless communication protocols that are designed for IoT applications have included mesh networking capability in their standards to enable scaling the network geographically through multi-hop operations. ... Many basic features of mesh networking are supported by all three of these protocols. For example, they all include the ability to self-heal, meaning that if a node is disabled or removed, the network reconfigures automatically to repair itself. However, there are major differences between these protocols. For instance, Bluetooth mesh uses a technique known as managed flooding to route data packets through the network, where messages are simply broadcast to all nearby nodes, while Zigbee and Thread use a full routing technique in which a specific path is chosen for the messages going from node A to node B. Such differences can have a significant impact on network performance depending on the application requirements and conditions. Evaluating certain aspects of the Bluetooth mesh technology, such as network latency, reliability, scalability, etc., might not be straightforward in some cases.
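
As a rough illustration of why the routing choice matters when evaluating latency and scalability, the sketch below (a simplified model of my own, not part of the article) counts the radio transmissions needed to deliver one message under managed flooding versus a single routed path on a toy topology.

```python
from collections import deque

# Toy topology: node -> neighbours within radio range (hypothetical example).
GRAPH = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D", "F"],
    "F": ["E"],
}

def flood(graph, src, dst, ttl=5):
    """Managed flooding: every node that receives the message for the first
    time rebroadcasts it, until the TTL runs out or the destination is reached."""
    transmissions, seen, frontier = 0, {src}, [src]
    for _ in range(ttl):
        if dst in seen:
            break
        nxt = []
        for node in frontier:
            transmissions += 1          # one broadcast per relaying node
            for n in graph[node]:
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier = nxt
    return transmissions if dst in seen else None

def routed(graph, src, dst):
    """Routed delivery: one transmission per hop along a shortest path (BFS)."""
    parents, queue = {src: None}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            hops = 0
            while parents[node] is not None:
                node, hops = parents[node], hops + 1
            return hops
        for n in graph[node]:
            if n not in parents:
                parents[n] = node
                queue.append(n)
    return None

print("flooding transmissions:", flood(GRAPH, "A", "F"))   # 5 on this topology
print("routed transmissions:  ", routed(GRAPH, "A", "F"))  # 3 on this topology
```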


How retail is using digital twins

The biggest takeaway is how digital twins make it easy to visualize complex relationships between physical things, including product placement, physical customer journeys and the paths robots might take down store aisles for inventory and floor cleaning. Managers and staff can explore how layouts, schedules, team movements and customer journeys interact in one visualization tool. They can also visually assess how a new store layout, schedule or technology might impact cleaning, restocking and staffing requirements. Digital twins also have the potential to improve customer experiences in various ways. They could help customers connect the dots between home improvement projects, required materials and materials costs. They could also help improve physical customer journeys within stores by organizing the order of shopping lists to line up with a route through the store. ... Emerging tech like digital twins, mixed reality and computer vision help capture data about the home and keep track of all the details to reduce this friction. The Lowe’s app takes advantage of the lidar built into the latest iPhones to capture home measurements quickly.


The Black Swan Events in Distributed Systems

When a system is asked to do more work than it possibly can, something is eventually going to fail. Maybe CPU usage climbs so high that it becomes the bottleneck and user requests start to time out, suggesting to users that the system is down. Or maybe disk space becomes the bottleneck and the system cannot store any more data. In a normal overload case, if the source of load or trigger is removed, the problem goes away and everything settles back to its normal, stable state. ... Once we put a blocklist in place to stop the traffic from offending IP addresses, the trigger goes away, load on the network returns to normal, user traffic begins to go through and the system comes back to its stable state. It’s hard to prevent such overloading incidents but usually easy to recover from them. There’s another class of overloading incidents that are much harder to resolve, where the system does not recover to its stable state just by removing the initial trigger. These incidents can keep a system down for a long period of time, hours or even days in some cases. This class of incidents, which continue even after the initial trigger has been removed, are called metastable failures.
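
One mechanism commonly used to explain such failures is retry amplification: the extra work generated by retrying failed requests keeps the system saturated even after the original spike ends. The toy model below is my own illustration of that idea and is not taken from the article.

```python
def simulate(retry_factor, capacity=100, base_load=80, spike=60,
             spike_steps=5, steps=15):
    """Toy overload model: requests that fail in one tick are retried in the
    next, multiplied by retry_factor (extra or duplicated retries)."""
    retries = 0.0
    for t in range(steps):
        offered = base_load + (spike if t < spike_steps else 0) + retries
        served = min(offered, capacity)
        retries = (offered - served) * retry_factor
        print(f"t={t:2d} offered={offered:9.1f} failed={offered - served:9.1f}")

simulate(retry_factor=1.0)  # trigger removed at t=5: the backlog drains and the system recovers
simulate(retry_factor=1.5)  # retry amplification: overload persists long after the trigger is gone
```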


Citrix® Modernizes Security to Accommodate Hybrid Work

Citrix Secure Private Access is a cloud-delivered ZTNA solution that provides contextual access to IT-sanctioned applications, whether they are deployed on-prem or in the cloud, and delivers security controls for managed, unmanaged and BYO devices. Using the solution, IT organizations can: provide zero trust network access to all apps, with adaptive authentication to continually evaluate access based on end user roles, locations, device posture, and user risk profiles; securely support distributed work and BYO programs without risking exposure to malicious content and web-borne threats; and simplify IT while enhancing the digital workspace experience for users. They can also enact a fresh approach to security that accommodates the realities of work today by giving employees the flexibility to work where they want using the devices of their choice, while ensuring that corporate data and assets remain safe. And many already are. Take HDI Global. With a rapidly growing work-from-home staff in Brazil, the international insurance company had a choice to make: increase investments in traditional servers and virtual machines, or enact a more modern approach to securely delivering apps.


Generations in the Workplace: Stereotypes and Facts

According to this report, ingrained stereotypes about age are actually far more likely to damage an IT team than failing to account for generational differences. It says, “What might really matter at work are not actual differences between generations, but people’s beliefs that these differences exist. These beliefs can get in the way of how people collaborate with their colleagues and have troubling implications for how people are managed and trained.” That's not to say that there are not some true differences among the generations. For example, when you look at the age when people get married (or if they get married at all), you will spot some notable disparities among various age groups. But those disparities might be smaller than you think. And even if many people in an age category share a particular trait, it doesn't mean that every person you work with from that category will have the characteristics you expect. So how should IT managers handle teams with members of varying ages? A good way to start is by examining your own attitudes to see if you are being shaped by prevailing opinions.



Quote for the day:

"Listening to the inner voice trusting the inner voice is one of the most important lessons of leadership. " -- Warren Bennis

Daily Tech Digest - April 04, 2022

What are Governance Tokens? How Token Owners Shape a DAO's Direction

Governance tokens represent ownership in a decentralized protocol. They provide token holders with certain rights that influence a protocol’s direction. This could include which new products or features to develop, how to spend a budget, which integrations or partnerships should be pursued, and more. Generally speaking, exercising this influence can take two forms. First, governance token holders can propose changes through a formal proposal submission process. If certain criteria are met and the proposal goes to a vote, governance token holders can use their tokens to vote on the proposed changes. The specific mechanisms and processes through which these rights are exercised differ across protocols. ... In traditional corporations, a concentrated executive body—typically some combination of a C-Suite, board of directors, and shareholders—has sole discretion over decisions pertaining to the organization’s strategic direction. DAOs differ from traditional corporations in that they don’t have a centralized group of decision-makers; but they still need to make decisions that influence the organization’s future.
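
As a purely illustrative sketch (the quorum rule, names and numbers below are hypothetical, not drawn from any specific protocol), token-weighted voting on a proposal can be reduced to a few lines:

```python
from collections import defaultdict

def tally(proposal, votes, balances, quorum=0.04, total_supply=1_000_000):
    """Token-weighted vote: each holder's weight equals their token balance.
    quorum is the fraction of total supply that must participate (hypothetical rule)."""
    weight = defaultdict(float)
    for holder, choice in votes.items():
        weight[choice] += balances.get(holder, 0)
    turnout = sum(weight.values()) / total_supply
    passed = turnout >= quorum and weight["yes"] > weight["no"]
    return {"proposal": proposal, "turnout": turnout, "passed": passed, **weight}

balances = {"alice": 50_000, "bob": 20_000, "carol": 5_000}
votes = {"alice": "yes", "bob": "no", "carol": "yes"}
print(tally("Fund integration with protocol X", votes, balances))
```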


Remote work vs office life: Lots of experiments and no easy answers

"It's important that it's an iterative process because we're going to find out things that we didn't necessarily expect in our assumptions around how the styles of work that we will be carrying out may well change as we start to reach a balance," he says. Lloyds is examining the work that takes place in offices, branches and homes, and is thinking about how the bank will connect people across these spaces in what Kegode refers to as "a mindful way". Developing that understanding involves constant conversations and an analysis of the crossover between business demands, individual needs and team requirements. "It's always about looking at how we can use technology as an enabler to make us more human," he says. "How can we use technology to enhance our human traits and the things that make us unique that machines can't do?" Lloyds started introducing Microsoft Teams just before the pandemic, which served the bank well when lockdown began. While video-conferencing tech has kept workers productive during the past two years, the future of the workplace will require careful conversations about how tools are adopted and adapted.


PCI SSC Releases Data Security Standard Version 4.0

The PCI Security Standards Council on Thursday released the Payment Card Industry Data Security Standard version 4.0. The Council says that the latest version's improvements are intended to counter evolving threats and technologies, and the new version will enable innovative methods to combat new threats. Organizations currently use PCI DSS version 3.2.1. The council is allowing two years - until March 31, 2024 - for the industry to conduct training and provide education regarding implementation of the changes and updates in version 4.0. While the new standard will be considered best practice, the current version of PCI DSS will remain active during this time. After March 31, 2024, it will be retired over the next year, and the new requirements will become effective after March 31, 2025. The global payments industry received feedback on the latest changes over the course of three years, during which more than 200 organizations provided more than 6,000 suggestions to ensure the standard continues to meet the ever-changing landscape of payment security, the council says.


Building Trust with Responsible AI

User-centered, reliable AI systems should be created using basic best practices for software systems, together with methods that address machine learning-specific problems. The following points should be kept in mind while designing a reliable and responsible AI. Consider augmenting and assisting users with a variety of options, and use a human-centered design approach; this includes building a model with appropriate disclosures, clarity, and control for users. Engage a wide range of users and use-case scenarios, and incorporate feedback before and during the project’s development. Rather than using a single metric, use a combination of metrics to better understand the tradeoffs between different types of errors and experiences, and make sure your metrics are appropriate for the context and purpose of your system; for example, a fire alarm system should have high recall, even if that implies a false alarm now and then. ML models will reflect the data they are trained on, so make sure you understand your raw data; if this isn’t possible, such as with sensitive raw data, try to understand your input data as much as possible while still maintaining privacy. Understand the limitations of your dataset and communicate them to users whenever possible.
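
To make the metrics point concrete, here is a small sketch (my own example) of why a single number hides the tradeoff: the recall-heavy "fire alarm" setting accepts more false alarms in exchange for missing fewer real events.

```python
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical confusion counts for two threshold settings of the same model.
cautious = precision_recall(tp=98, fp=40, fn=2)   # fire-alarm style: high recall, more false alarms
strict   = precision_recall(tp=80, fp=5,  fn=20)  # high precision, but misses real events

print(f"cautious: precision={cautious[0]:.2f} recall={cautious[1]:.2f}")
print(f"strict:   precision={strict[0]:.2f} recall={strict[1]:.2f}")
```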


The CISO as brand enabler, customer advocate, and product visionary

Quantifying the value of a corporate brand is tough. But it’s clear that your organization’s brand is as much an asset as the devices and networks that the CISO is charged with protecting – in fact, the brand may be your organization’s largest single asset. A recent Forbes/MASB report states that brand assets drive approximately 20% of enterprise value on average. Doesn’t that sound like something worth protecting? Yes, the creation and growth of the brand is typically the responsibility of the marketing organization and the CMO (chief marketing officer). But it’s not unusual for marketing to feel like it’s outracing the other business functions, including the CISO, and they are anxious for everyone to “catch up” and join them. The CISO can act as a useful counterweight to help marketing achieve its goals safely, in good times and bad. For example, isn’t it important to fully coordinate a breach response between these two groups in a way that best preserves the value of your brand? Those brands that emerge out of a high-profile information security incident stronger don’t get there by accident.


Introducing the MeshaVerse: Next-Gen Data Mesh 2.0

When designing MeshaVerse, our primary focus was on preserving decentralization while ensuring data reliability, data quality, and scale. Our novel approach includes implementing Dymlink, a symlink in the data lakehouse, and a new SlinkSync (Symbolic link Sync), a symlink that links Dymlinks together – similar to a linked list. By establishing which symlinks can be composed as a set – using either a direct probable or indirect inverse probable match – we are able to infer the convergence criteria of a nondivergent series (i.e., the compressed representation of the data) while always ensuring we stay within the gradient of the curve. As a result, we’re able to prevent an infinite recursion that could potentially stall all data retrieval from the Data Mesh. Stay tuned for a future blog, where we’ll dive deeper into this approach. The integrity of this virtual data is ensured in real-time and at scale using a more recent implementation of Databricks Brickchain, taking advantage of all global compute power and therefore offering the potential to store the entire planet’s data with a fraction of the footprint.


DAOs could revolutionize how startups are run

Blockchain technology has ushered in the creation of businesses that allow users greater control over the services they choose to use. These emerging services turn the top-down approach of traditional tech firms on its head, allowing patrons to have a say in the development of a new generation of Web3-based games, apps, and companies. VCs currently have a monopoly on decision-making in their chosen investments, giving them the power to dictate critical judgments and the direction of these companies. While this sounds fair in theory — given the money they provide — this can also mean that critical decisions get slowed, or the original vision for the company diverges entirely. However, under the Web3 model, it makes sense that key business decisions should be as decentralized as the infrastructure that underpins them. Decentralized voting via a token governance structure means that anyone — regardless of their ethnicity, creed or financial status — can get involved and benefit from being part of a like-minded community of peers, removed from the hierarchical structure of the standard business model.


5 things CIOs should know about cloud service providers

While cloud service providers may offer similar capabilities, they are not actually the same. Determining the best one for your unique requirements and goals is another critical piece of your strategy. “When working with cloud service providers, it’s important to align the platform with the company’s unique business objectives,” says Scott Gordon, enterprise architect director, cloud infrastructure services at Capgemini Americas. “Every organization has its own situation, and the cloud strategy must be catered to solve those customized business challenges to create value and results.” While there might be some plain-vanilla workloads where the choice of cloud service provider might not have overwhelming implications, most organizational realities are more complex. Thinking back to the advice from Haff and LaCour, this is again where specific motivations or goals have a big impact. Gordon notes, for example, the importance of evaluating the end-to-end life cycles of your on-premises applications and determining which ones will require modernization and/or migration at some point.


General Catalyst’s Clark Talks Opportunistic Investing in Tech

We have to balance thematic with what we refer to as opportunistic work. We have to pay attention and engage with companies that get referred to us through our founders and other parts of our network. There are other incubator functions--that is important for us to engage in because we don’t necessarily see everything as we view things thematically. It’s just impossible. We do some of our very best work when we are being more intentional. ... Another area is dynamic workforce, which is a little fuzzy. I fit things like Remote.com, Awardco, and Hopin into these things, as well as things like Loom and Glean where it’s not just the tools end users are using because they are much more project-based than they used to be. Now it’s like, “You’re going to do this project and when that’s done, there’s another one. Maybe you do two at once and the teams you work with are different.” It’s a different system that we’ve put in place. Distributed work is permanent now. We will get back in the office one, two, three days a week -- or not. 


Improving open source software governance stature

The first line of defense against vulnerable open source libraries is to scan a project's dependencies for libraries known to have security vulnerabilities. OWASP Dependency-Check is a tool that returns a report that identifies vulnerable dependencies, along with their common vulnerabilities and exposures (CVEs). There are different ways to run OWASP Dependency-Check, such as via a command-line interface, an Apache Maven plugin, an Ant task or a Jenkins plugin, which enables easy integration into any CI/CD pipeline. Using a tool that creates actionable reports is only as useful as the process enforced around the tool. Run OWASP Dependency-Check on a consistent schedule to scan the codebase against the latest updates of newly discovered CVEs. Dedicate time and plan for identified CVEs. When using open source dependencies, consider the licenses that govern their use. Licenses for open source projects define how to use, copy and distribute the software. Depending on the application's software and distribution types, the application's source code might not permit certain open source tools.
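
One way to make the reports actionable in a pipeline is to fail the build whenever the scan finds a vulnerability above a severity threshold. The sketch below is my own; the JSON field names reflect the report layout I am assuming and should be checked against the Dependency-Check version you actually run.

```python
import json
import sys

THRESHOLD = 7.0  # fail the build on CVSS v3 base scores at or above this value

def failing_cves(report_path, threshold=THRESHOLD):
    """Scan an OWASP Dependency-Check JSON report for vulnerabilities whose
    CVSS v3 base score meets or exceeds the threshold."""
    with open(report_path) as f:
        report = json.load(f)
    hits = []
    for dep in report.get("dependencies", []):
        for vuln in dep.get("vulnerabilities", []) or []:
            score = (vuln.get("cvssv3") or {}).get("baseScore", 0.0)
            if score >= threshold:
                hits.append((dep.get("fileName"), vuln.get("name"), score))
    return hits

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "dependency-check-report.json"
    hits = failing_cves(path)
    for file_name, cve, score in hits:
        print(f"{file_name}: {cve} (CVSS {score})")
    sys.exit(1 if hits else 0)  # non-zero exit fails the CI stage
```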



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown

Daily Tech Digest - April 03, 2022

With Identity Management, Start Early for Less Tech Debt

Starting with a robust identity and access management (IAM) solution will give new projects a head start on the competition. Users will have access to more features earlier. Additionally, avoiding growing pains also means avoiding tech debt. Any new project has challenges right from the start. Finishing the MVP is a high priority. Planning meetings to outline necessary features and requirements can suffer from scope creep. Every shortcut taken to deliver on time borrows against the future. Tech debt is a known cost, and many startups take on a significant amount. Since any app needs users, it eventually comes down to planning the features and structures needed. Everyone is a user themselves, so it’s easy to come up with a variety of useful features. Single sign-on, social logins and multifactor authentication are all conventional IAM features included in the project scope and planned out for customers. Features and domain knowledge are designed around what the team thinks a user will need. A user’s footprint within your app gets built out in forms and user profile pages. Business data and user data are stored together.


Enterprise Architects Can Be Indispensable in the Boardroom

Data is enterprise currency, and executive management discussions in the boardroom are data-driven. A knowledgeable enterprise architect can show the board how data for business requirements are translated into technological specifications. EA can provide timely reports on the status of the current application landscape and IT inventory to provide data that addresses crucial boardroom evaluations and decision-making. Use reports to tie EA into business processes during regular meetings. Data can be used to illustrate real issues with simple diagrams and use cases, demonstrating options and concrete results. EA overlays on top of the business model can help boardroom members visualize cost, revenue, risk, and performance metrics to support decisions and track alignment with initiatives. The enterprise architect is the data guru of the boardroom. ... If you want to have a game in the boardroom, you must get to know the players. You need the sponsorship of executives who wield real influence and can promote engagement of EA initiatives. 


Europe’s AI Act contains powers to order AI models destroyed or retrained, says legal expert

The European Commission put out its proposal for an AI Act just over a year ago — presenting a framework that prohibits a tiny list of AI use cases, considered too dangerous to people’s safety or EU citizens’ fundamental rights to be allowed, while regulating other uses based on perceived risk — with a subset of “high risk” use cases subject to a regime of both ex ante (before) and ex post (after) market surveillance. In the draft Act, high-risk systems are explicitly defined as: Biometric identification and categorisation of natural persons; Management and operation of critical infrastructure; Education and vocational training; Employment, workers management and access to self-employment; Access to and enjoyment of essential private services and public services and benefits; Law enforcement; Migration, asylum and border control management; Administration of justice and democratic processes. Under the original proposal, almost nothing is banned outright — and most use cases for AI won’t face serious regulation under the Act, as they would be judged to pose “low risk” and so are largely left to self-regulate — with a voluntary code of standards and a certification scheme to recognize compliant AI systems.


Why a ruling on digital ID by Kenya's Supreme Court has global implications for online privacy

Kenya’s digital ID programme, called the National Integrated Identity Management System (NIIMS), was ruled illegal by the highest court because there was no clear documentation of the data privacy risks, nor was there a clear strategy for measuring, mitigating and dealing with those risks. Related concerns about data privacy and security have arisen in other digital ID platforms as well. For example, India’s Aadhaar is the world’s largest biometric digital ID system. Registration is linked to biometrics and demographics, and can connect to services including SIM cards, bank accounts, and government aid programmes, making financial systems more inclusive. Despite these advantages, Aadhaar has seen pushback regarding feasibility and privacy. ... A major risk surrounding biometrics in particular is that if, and when, an attacker obtains these credentials for a victim, they may be able to impersonate the victim indefinitely, since a user’s biometrics do not change. These risks can be mitigated using emerging technologies like computation over encrypted data with rotating keys. 


Why did AI pioneer Marvin Minsky oppose neural networks?

The Dartmouth Summer Research Project on Artificial Intelligence in 1956 is widely considered as the founding moment of artificial intelligence as a field: John Mccarthy, Marvin Minsky, Claude Shannon, Ray Solomonoff etc attended the eight-week long workshop held in New Hampshire. On the fiftieth anniversary of the conference, the founding fathers of AI returned to Dartmouth. When Minsky took the stage, Salk Institute professor Terry Sejnowski told him some AI researchers view him as the devil for stalling the progress of neural networks. “Are you the devil?” Sejnowski asked. Minsky brushed him off and went on to explain the limitations of neural networks, pointing out neural networks haven’t delivered the goods yet. But Sejnowski was persistent. He asked again: “Are you the devil?”. A miffed Minsky retorted: “Yes, I am.” Turing award winner Marvin Minsky has made major contributions in cognitive psychology, symbolic mathematics, artificial intelligence, robot manipulation, and computer vision. As an undergraduate student at Harvard, Minsky built SNARC, considered the ‘first neural network’ by many, using over 3000 vacuum tubes and a few components from the B-52 bomber.


Is the Future of Digital Identity Safe?

Although multifactor authentication is crucial for preventing a great percentage of attacks, it is not enough – not in today’s rapidly changing threat landscape. Enterprises need to evolve their identity and access management policy towards a modernized authentication solution. As Uri and I agreed, we need to leverage multiple data layers that would allow us to map a legitimate behavior versus a malicious one. Not only do we need to examine contextual data like location and device, but we also need to consider behavioral insights and look at micro behaviors such as hesitation, distraction, and rest. Having all these data layers, we can then leverage machine learning to aggregate them into a coherent analysis that indicates abnormal behaviors. Besides enabling artificial intelligence and machine learning to enhance our security posture, it is equally important to consider customer experience. For example, the best authentication tools today rely on mobile applications. What happens if a portion of your employees cannot use their mobile phone, or are reluctant to have a work app installed on their personal device?
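
As a rough sketch of how such layers might be aggregated into one anomaly signal (my own example using scikit-learn's IsolationForest, not any particular vendor's model), contextual and behavioral features can be scored together:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-session features: [login_hour, new_device, distance_km,
# hesitation_ms, typing_speed]. Real systems would use far richer signals.
normal_sessions = np.array([
    [9, 0, 2, 120, 310],
    [10, 0, 5, 150, 295],
    [14, 0, 1, 110, 305],
    [9, 0, 3, 130, 300],
    [11, 0, 4, 140, 290],
])

model = IsolationForest(contamination="auto", random_state=0).fit(normal_sessions)

# A session from a new device, far away, with unusual micro-behaviour.
suspect = np.array([[3, 1, 4200, 900, 80]])
print(model.decision_function(suspect))  # lower score = more anomalous
print(model.predict(suspect))            # -1 flags the session as an outlier
```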


Metaverses and DAO: Are Crypto Enthusiasts Ready to Usher Them In? 

There are already many who see the metaverse as a tremendous and thrilling possibility. According to many observers, the venture will be a new chance for economies, working settings, and further interaction. However, the metaverse, like any technology, requires rigorous research and careful use to be sustainable. Cryptocurrencies came under fire last year over environmental concerns, and the metaverse has to counteract this to emerge on top. There are some principles underlying the metaverse: data sovereignty, privacy and governance, and honesty. It also focuses on both diversity and utmost respect for users. To stay loyal to the metaverse's values, those who work on its future need to follow specific rules. In addition, the move allows for long-term benefits, such as environmental sensitivity, social responsibility, or fiscal prudence. The future of the metaverse looks like many different things for different people. The ability to create virtual worlds and draw people in is a lucrative new career for some. Furthermore, NFTs can be incorporated to give value to virtual space in the metaverse and allow users to earn income.


Application-Layer Encryption Basics for Developers

Application-layer encryption is worth considering when you work across multiple infrastructures and, for instance, HTTPS only covers a small part of the data flow inside your infrastructure; when you need an extra layer of protection because the data is sensitive or may go outside of a specific infrastructure; and, most importantly, when you need to enforce access control with encryption. For example, if you think of something like end-to-end encryption in a chat app, the access control is that the sender and receiver are really the only people who can access that data. That's not enforced just with a bit on a server saying who's allowed to do what; it's enforced through control of cryptographic key material. It's very clear how to use that in chat, but it's actually a generalizable capability that you can use across lots of different types of use cases. As in that use case, application layer encryption improves privacy. In some cases, it improves privacy substantially. It's also significantly harder for developers than just implementing something like HTTPS.
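
A minimal sketch of the key point (my own example, using the widely available cryptography package's Fernet recipe rather than a full end-to-end protocol): whoever holds the key material can read the message, and no server-side permission bit is involved.

```python
from cryptography.fernet import Fernet

# The sender and receiver share this key out of band; the server never sees it.
key = Fernet.generate_key()
sender, receiver = Fernet(key), Fernet(key)

ciphertext = sender.encrypt(b"card ending 4242, ship to 221B Baker St")

# The server can store or relay the ciphertext, but without the key it is opaque.
print(ciphertext[:40], b"...")

# Only a party holding the key material can recover the plaintext.
print(receiver.decrypt(ciphertext))
```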


What the media is missing about decentralized autonomous organizations

While we’ve only scratched the surface of the potential DAOs have to create a radically more transparent and equitable financial system, we’ve already seen projects emerge that are delivering real value to real people in the real world today. One example is the war in Ukraine, where UkraineDAO, set up by Russian art collective Pussy Riot and Trippy Labs, raised over $6.75 million worth of Ether (ETH) donated directly to Ukrainian defense efforts against Russia. While this amount may not shift the balance of the war, the rapid creation and scaling-up of UkraineDAO demonstrate the power of decentralized financial technologies to coordinate a disparate global group of individuals around a single cause to deliver tangible results. But, the value of DAOs goes beyond just raising funds for noble causes under duress. In fact, many DAOs are already providing sustainable value to participants across the world and even harnessing blockchain technology to take on some of the most pressing challenges of our time such as climate change. 


The Evolution to Service-Based Networking

As application delivery evolved, orchestrators such as Kubernetes, Mesos and Docker Swarm integrated discovery functionality to reduce the need for those manual scripts. And while that’s great, what does it mean for the evolution of networking? A lot, actually. Networking still needs to be based on service identity because that’s how orchestrators track things, but the shift away from static, IP-based networking toward a service-based networking solution that these service discovery features provided was perhaps the most impactful change to networking, making application identity the basis for networking policies. Networking’s transition to a service-identity-based networking requirement also has cascading effects on other workflows. The first, and arguably the most important, is on security. While service discovery may solve for tracking changes more dynamically, it doesn’t help you apply consistent security policies to those applications. As I mentioned earlier, enforcing security and access to sensitive data is a core networking requirement.



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland

Daily Tech Digest - April 02, 2022

PaaS is back: Why enterprises keep trying to resurrect self-service developer platforms

As ever in enterprise IT, it’s a question of control. Or, really, it’s an attempt by organizations to find the right balance between development and operations, between autonomy and governance. No two enterprises will land exactly the same on this freedom continuum, which is arguably why we see every enterprise determined to build its own PaaS/cloud. Hearkening back to Coté’s comment, however, the costs associated with being a snowflake can be high. One solution is simply to enable developer freedom … up to a point. As Leong stressed: “I talk to far too many IT leaders who say, ‘We can’t give developers cloud self-service because we’re not ready for You build it, you run it!’ whereupon I need to gently but firmly remind them that it’s perfectly okay to allow your developers full self-service access to development and testing environments, and the ability to build infrastructure as code (IaC) templates for production, without making them fully responsible for production.” In other words, maybe enterprises needn’t give their developers the keys to the kingdom; the garage will do.


Why EA As A Subject Is A "Must Have" Now Than Ever Before?

Enterprise architecture as a subject and knowledge of reference architecture like IT4ITTM would help EA aspirants appreciate tools for managing a digital enterprise. As students, we know that various organizations are undergoing digital transformation. But hardly do we understand where to start the journey or how to go about the digital transformation if we are left on our own. Knowledge of the TOGAF® Architecture Development Method (ADM) would be a fantastic starting point to answer the abovementioned question. The as-is assessment followed by to-be assessment (or vice versa depending on context) across business, data, application and technology could be a practical starting point. The phase “Opportunities and Solutions” would help get a roadmap of several initiatives an enterprise can choose for its digital transformation. Enterprise Architecture as a subject in b-school would cut across various subjects and help students with a holistic view.


5 steps to minimum viable enterprise architecture

At Carrier Global Corp., CIO Joe Schulz measures EA success by business metrics such as how employee productivity is affected by application quality or service outages. “We don’t look at enterprise architecture as a single group of people who are the gatekeepers, who are more theoretical in nature about how something should work,” says Schulz. He uses reports and insights generated by EA tool LeanIX to describe the interconnectivity of the ecosystem as well as the system capabilities across the portfolio to identify redundancies or gaps. This allows the global provider of intelligent building and cold chain solutions to “democratize a lot of the decision-making…(to) bring all the best thinking and investment capacity across our organization to bear.” George Tsounis, chief technology officer at bankruptcy technology and services firm Stretto, recommends using EA to “establish trust and transparency” by informing business leaders about current IT spending and areas where platforms are not aligned to the business strategy. That makes future EA-related conversations “much easier than if the enterprise architect is working in a silo and hasn’t got that relationship,” he says.


3 strategies to launch an effective data governance plan

Develop a detailed lifecycle for access that covers employees, guests, and vendors. Don’t delegate permission setting to an onboarding manager as they may over-permission or under-permission the role. Another risk with handling identity governance only at onboarding is that this doesn’t address changes in access necessary as employees change roles or leave the company. Instead, leaders of every part of the organization should determine in advance what access each position needs to do their jobs—no more, no less. Then, your IT and security partner can create role-based access controls for each of these positions. Finally, the compliance team owns the monitoring and reporting to ensure these controls are implemented and followed. When deciding what data people need to access, consider both what they’ll need to do with the data and what level of access they need to do their jobs. For example, a salesperson will need full access to the customer database, but may need only read access to the sales forecast, and may not need any access to the accounts payable app.
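
A simple sketch of what those pre-defined, role-based controls might look like in code (role and resource names here are hypothetical, following the salesperson example above):

```python
# Permissions defined per role in advance, not per individual at onboarding.
ROLE_PERMISSIONS = {
    "salesperson": {
        "customer_database": {"read", "write"},
        "sales_forecast": {"read"},
        # no entry for accounts_payable -> no access at all
    },
    "accounts_payable_clerk": {
        "accounts_payable": {"read", "write"},
    },
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Return True only if the role was granted this action on this resource."""
    return action in ROLE_PERMISSIONS.get(role, {}).get(resource, set())

print(is_allowed("salesperson", "customer_database", "write"))  # True
print(is_allowed("salesperson", "sales_forecast", "write"))     # False (read-only)
print(is_allowed("salesperson", "accounts_payable", "read"))    # False (no access)
```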


The Profound Impact of Productivity on Your Soul

Finishing what you set out to do feels great. Have you ever had a rush of satisfaction after checking off that last item on your to-do list? Feeling satisfied and fulfilled about what you are doing is the essence of great productivity. Of course, it means you are getting stuff done, but you are also getting stuff done that is actually important and meaningful. ... When we “do,” we share a piece of ourselves with the world. Our work can speak volumes about ourselves. Every time we decide to be productive and take action to complete something, we are embracing our identity and who we are. Being able to choose our efforts and be who we want to be is a rewarding feeling. However, it is also essential to ensure you are doing it for yourself and are not trying to meet someone else’s expectations of you. For example, some younger kids will play sports that they hate to ensure the happiness of their parents. The kids are doing it for their parents, rather than themselves. What happens when you don’t do it for yourself is twofold: first, you become dependent on someone else’s validation.


Apple and Meta shared data with hackers pretending to be law enforcement officials

Apple and Meta handed over user data to hackers who faked emergency data request orders typically sent by law enforcement, according to a report by Bloomberg. The slip-up happened in mid-2021, with both companies falling for the phony requests and providing information about users’ IP addresses, phone numbers, and home addresses. Law enforcement officials often request data from social platforms in connection with criminal investigations, allowing them to obtain information about the owner of a specific online account. While these requests require a subpoena or search warrant signed by a judge, emergency data requests don’t — and are intended for cases that involve life-threatening situations. Fake emergency data requests are becoming increasingly common, as explained in a recent report from Krebs on Security. During an attack, hackers must first gain access to a police department’s email systems. The hackers can then forge an emergency data request that describes the potential danger of not having the requested data sent over right away, all while assuming the identity of a law enforcement official. 


New algorithm could be quantum leap in search for gravitational waves

Grover's algorithm, developed by computer scientist Lov Grover in 1996, harnesses the unusual capabilities and applications of quantum theory to make the process of searching through databases much faster. While quantum computers capable of processing data using Grover's algorithm are still a developing technology, conventional computers are capable of modeling their behavior, allowing researchers to develop techniques which can be adopted when the technology has matured and quantum computers are readily available. The Glasgow team are the first to adapt Grover's algorithm for the purposes of gravitational wave search. In the paper, they demonstrate how they have applied it to gravitational wave searches through software they developed using the Python programming language and Qiskit, a tool for simulating quantum computing processes. The system the team developed is capable of a speed-up in the number of operations proportional to the square-root of the number of templates. Current quantum processors are much slower at performing basic operations than classical computers, but as the technology develops, their performance is expected to improve.
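
The quadratic speed-up has a simple back-of-the-envelope form. The snippet below (my own illustration, not the team's Qiskit code) compares the expected comparisons of a classical linear search with the roughly (pi/4)*sqrt(N) Grover iterations needed for a template bank of size N.

```python
import math

def grover_iterations(n_templates: int) -> int:
    """Optimal number of Grover iterations to find a single marked item
    among N is approximately floor(pi/4 * sqrt(N))."""
    return math.floor(math.pi / 4 * math.sqrt(n_templates))

for n in (1_000, 1_000_000, 1_000_000_000):
    classical = n / 2  # expected comparisons for a classical linear search
    print(f"N={n:>13,}  classical~{classical:>13,.0f}  Grover~{grover_iterations(n):>8,}")
```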


ID.me and the future of biometric zero trust architecture

Although poorly executed and architected, ID.me and the IRS were on the right path: biometrics is a great way to verify identity and provides a way to deter fraud. But the second part, the part they missed, is that biometrics only fights fraud if it is deployed in a way that preserves user privacy and doesn’t itself become a new data source to steal. Personal data fraud has become the seemingly unavoidable penalty for the convenience of digital services. According to consumer reporting agency Experian, fraud has increased 33 percent over the past two years, with fraudulent credit card applications being one of the main infractions. Cisco’s 2021 Cybersecurity Threat Trends report finds that at least one person clicked a phishing link in 86 percent of organizations and that phishing accounts for 90 percent of data breaches. It’s hard not to think that storing the personal and biometric data of the entire United States tax-paying population in one database would become a catalyst for the mother of all data breaches.


GitOps Workflows and Principles for Kubernetes

In essence, GitOps uses the advantages of Git with the practicality and reliability of DevOps best practices. By utilizing things like version control, collaboration and compliance and applying them to infrastructure, teams are using the same approach for infrastructure management as they do for software code, enabling greater collaboration, release speed and accuracy. ... Just like Kubernetes, GitOps is declarative: Git declares the desired state, while GitOps works to achieve and maintain that state. As mentioned above, GitOps creates a single source of truth because everything—from your app code to cluster configurations—is stored, versioned and controlled in Git. GitOps focuses on automation: the approved desired state can be automatically applied and does not require hands-on intervention. Having built-in automated environment testing (the same way you test app code) leverages a familiar workflow used in other places to ensure software quality initiatives are being met before merging to production. GitOps is, in a way, self-regulating: if the application deviates from the desired state, an alert can be raised.
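
At its core, that self-regulating behaviour is a reconciliation loop. The sketch below is a deliberately simplified model of the idea, not any particular GitOps tool's implementation:

```python
def reconcile(desired: dict, actual: dict):
    """Compare the desired state declared in Git with the observed cluster
    state and report what needs to change (or what counts as drift)."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))
        elif actual[name] != spec:
            actions.append(("update", name, spec))   # drift detected
    for name in actual:
        if name not in desired:
            actions.append(("delete", name, None))   # not declared in Git
    return actions

desired = {"checkout": {"image": "checkout:1.4", "replicas": 3}}
actual  = {"checkout": {"image": "checkout:1.3", "replicas": 3},
           "legacy-cron": {"image": "cron:0.9", "replicas": 1}}

for action in reconcile(desired, actual):
    print(action)   # an operator would apply these changes, or raise an alert
```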


Running legacy systems in the cloud: 3 strategies for success

Teams are capable of learning, but may not be familiar with cloud at the onset of the project. This impacts not only the initial migration but also Day 2 operations and beyond, especially given the velocity of change and new features that the hyperscale platforms — namely Amazon Web Services, Google Cloud Platform, and Microsoft Azure — roll out on a continuous basis. Without the necessary knowledge and experience, teams struggle to optimize their legacy system for cloud infrastructure and resources — and then don’t attain the full capabilities of these platforms. ... No one gains a competitive advantage from worrying about infrastructure these days; they win with a laser focus on transforming their applications and their business. That’s a big part of cloud’s appeal – it allows companies to do just that because it effectively takes traditional infrastructure concerns off their plates. You can then shift your focus to business impacts of the new technologies at your disposal, such as the ability to extract data from a massive system like SAP and integrate with best-of-breed data analytics tooling for new insights.



Quote for the day:

"A friend of mine characterizes leaders simply like this : "Leaders don't inflict pain. They bear pain." -- Max DePree

Daily Tech Digest - April 01, 2022

Verification Scans or Automated Security Requirements: Which Comes First?

Testing for weaknesses after code is written is reactive. A better approach is to anticipate weaknesses before code is written and assign mitigation controls as part of the development process. This is accomplished through security requirements. Just as functional requirements provide teams with information on the features and performance needed in a project, security requirements provide teams with required controls to mitigate risk from potential weaknesses before coding begins. Most of these weaknesses are predictable based on the regulatory requirements in scope for the application along with the language, framework, and deployment environment. By translating these into mitigation controls — actionable tasks to be implemented by product development teams, security and operations during the normal development process — teams can build more secure software and avoid much of the “find and fix” delays they currently endure. With complete security requirements and appropriate mitigation controls as part of the overall project requirements, security is built-in as the application is developed.


Software Engineers vs. Full-Stack Developers: 4 Key Differences

Both full-stack developers and software engineers must have a detailed knowledge of coding languages. But full-stack developers tend to require a broader knowledge of more advanced languages than a software engineer. This is because of the range of areas they work across, from front-end development and core application to back-end development. A full-stack developer’s responsibilities include designing user interfaces or managing how an app functions, among other e-commerce development essentials. But they’ll also work on back-end support for the app, as well as manage databases and security. With such a varied list of responsibilities, full-stack development often means overseeing a portfolio of technology, reacting to needs with agility, and switching from one area to another as and when required. A software engineer has a narrower, although no less skilled remit. As well as their essential software development, they test for and resolve programming errors, diving back into the code in order to debug and often using QA automation to speed up testing.


Low-code speeds up development time, but what about testing time?

Test debt is exactly what it sounds like. Just like when you cannot pay your credit card bill, when you cannot test your applications, the problems that are not being found in the application continue to compound. Eliminating test debt requires first establishing a sound test automation approach. Using this an organization can create a core regression test suite for functional regression and an end-to-end test automation suite for end-to-end business process regression testing. Because these are automated tests they can be run as often as code is modified. These tests can also be run concurrently, reducing the time it takes to run these automated tests and also creating core regression test suites. According to Rao, using core functional regression tests and end-to-end regression tests are basic table stakes in an organization’s journey to higher quality. Rao explained that when getting started with test automation, it can seem like a daunting task, and a massive mountain that needs climbing. “You cannot climb it in one shot, you have to get to the base camp. And the first base camp should be like a core regression test suite, that can be achieved in a couple of weeks, because that gives them a significant relief,” he said.


Scaling and Automating Microservice Testing at Lyft

Lyft built its service mesh using Envoy, ensuring that all traffic flows through Envoy sidecars. When a service is deployed, it is registered in the service mesh, becomes discoverable, and starts serving requests from the other services in the mesh. An offloaded deployment contains metadata that stops the control plane from making it discoverable. Engineers create offloaded deployments directly from their pull requests by invoking a specialised GitHub bot. Using Lyft's proxy application, they can add protobuf-encoded metadata to requests as OpenTracing baggage. This metadata is propagated across all services throughout the request's lifetime regardless of the service implementation language, request protocol or queues in between. The Envoy's HTTP filter was modified to support staging overrides and route the request to the offloaded instance based on the request's override metadata. Engineers also used Onebox environments to run integration tests via CI. As the number of microservices increased, so did the number of tests and their running time. Conversely, its efficacy diminished for the same reasons that led to Onebox's abandonment.
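
To illustrate the propagation mechanism in the abstract (a rough sketch only: the baggage keys are hypothetical, and Lyft's actual implementation encodes the overrides as protobuf), OpenTracing baggage lets a single request carry override metadata across every hop:

```python
import opentracing

tracer = opentracing.global_tracer()

def call_downstream(route: str, override_service: str, override_address: str):
    """Attach staging-override metadata as baggage so every service the
    request touches can see it, regardless of language or protocol."""
    with tracer.start_active_span("checkout-request") as scope:
        # Baggage items are propagated along the whole request path.
        scope.span.set_baggage_item("staging-override-service", override_service)
        scope.span.set_baggage_item("staging-override-address", override_address)
        # ... issue the actual RPC here; a routing filter (or the receiving
        # service) reads the baggage and redirects traffic for the named
        # service to the offloaded deployment instead of the default one.

call_downstream("/v1/rides", "pricing-service", "10.0.0.42:8080")
```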


How decentralised finance is 'DeFi-ying' the norm

The DeFi sector has, to date, been based on the distributed ledger principle of “trustlessness”, whereby users replace trust in an economic relationship with an algorithm. DeFi is oversaturated with trustless applications, says Sidney Powell, CEO and co-founder of Maple Finance. This includes over-collateralised lending, whereby borrowers put up assets worth two or three times the loan value, as well as decentralised exchanges and yield aggregators, which put your money into a smart contract that searches for the best yield from other smart contracts. “I think the opportunities are in areas where there is a bit of human communication in transacting or using the protocol,” Powell says. Maple’s model, which requires no collateral when it matches lenders with institutional borrowers, requires applications to be vetted and underwritten by experienced humans rather than code. From that point on, however, it is based on transparency – lenders monitor who is borrowing, the current lending strategy and pool performance in real time. 


Google tests its Privacy Sandbox and unveils new user controls

The Google Privacy Sandbox initiative is advancing in tandem with the growth of the global data privacy software market, which researchers valued at $1.68 billion in 2021, and anticipate will reach $25.85 billion by 2029 as more organizations attempt to get to grips with international data protection laws. Google isn’t the only big tech provider attempting to innovate new solutions to combat the complexity of data protection regulations. Meta’s engineers recently shared some of the techniques the organization uses to minimize the amount of data it collects on customers, including its Anonymous Credentials Service (ACS), which enables the organization to authenticate users in a de-identified manner without processing any personally identifiable information. Likewise, just a year ago, Apple released the App Tracking Transparency (ATT) framework as part of iOS 14, which forces Apple developers to ask users to opt-in to cross-app tracking. Google Privacy Sandbox Initiative’s approach stands out because it gives users more transparency over the type of information collected on them, while giving them more granular controls to remove interest-based data at will.


Upcoming Data Storage Technologies to Keep an Eye On

Technology, deployment model, and cross-industry issues are all contributing to the evolution of data storage, according to Tong Zhang, a professor at the Rensselaer Polytechnic Institute, as well as co-founder and chief scientist for ScaleFlux. An uptick in new technologies and further acceleration in data generation growth are also moving storage technologies forward. Deployment models for compute and storage must evolve as edge, near-edge, and IoT devices change the landscape of IT infrastructure landscape, he says. “Cross-industry issues, such as data security and environmental impact / sustainability, are also major factors driving data storage changes.” Four distinct factors are currently driving the evolution in storage technology: cost, capacity, interface speeds, and density, observes Allan Buxton, director of forensics at data recovery firm Secure Data Recovery Services. Hard disk manufacturers are competing with solid-state drive (SSD) makers by decreasing access and seek times while offering higher storage capacities at a lower cost, he explains. 


JavaScript security: The importance of prioritizing the client side

In terms of the dangers, if an organization becomes the victim of a client-side attack, they may not know it immediately, particularly if they’re not using an automated monitoring and inspection security solution. Sometimes it is an end-user victim (like a customer) that finds out first, when their credit card or PII has been compromised. The impact of these types of client-side attacks can be severe. If the organization has compliance or regulatory concerns, then investigations and significant fines could result. Other impacts include costs associated with attack remediation, operational delays, system infiltration, and the theft of sensitive credentials or customer data. There are long-term consequences, as well, such as reputation damage and lost customers.  ... Compliance is also a major concern. Regulatory mandates like GDPR and HIPAA, as well as regulations specific to the financial sector, mean that governments are putting a lot of pressure on organizations to keep sensitive user information safe. Failing to do so can mean investigations and substantial fines.


Lock-In in the Age of Cloud and Open Source

The cloud can be thought of in three layers: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). While IaaS can be thought of as renting hardware in the cloud, PaaS and SaaS need to be thought of in a completely different way (Hardware 1.0 vs. Hardware 2.0). Migrating between services for IaaS is relatively straightforward, and a buyer is fairly well protected from vendor lock-in. Services higher up the stack, not so much. It remains to be seen if the cloud providers will actually win in the software world, but they are definitely climbing up the stack, just like the original hardware vendors did, because they want to provide stickier solutions to their customers. Let’s explore the difference between these lower-level and higher-level services from a vendor lock-in perspective. With what I call Hardware 2.0, servers, network and storage are rented in the cloud and provisioned through APIs. The switching costs of migrating virtual machines from one cloud provider to another equate to learning a new API for provisioning.


What is autonomous AI? A guide for enterprises

Autonomous artificial intelligence is defined as routines designed to allow robots, cars, planes and other devices to execute extended sequences of maneuvers without guidance from humans. The revolution in artificial intelligence (AI) has reached a stage when current solutions can reliably complete many simple, coordinated tasks. Now the goal is to extend this capability by developing algorithms that can plan ahead and build a multistep strategy for accomplishing more. Thinking strategically requires a different approach than many successful well-known applications for AI. Machine vision or speech recognition algorithms, for instance, focus on a particular moment in time and have access to all of the data that they might need. Many applications for machine learning work with training sets that cover all possible outcomes. ... Many autonomous systems are able to work quite well by simplifying the environment and limiting the options. For example, autonomous shuttle trains have operated for years in amusement parks, airports and other industrial settings. 



Quote for the day:

"Leadership is about change... The best way to get people to venture into unknown terrain is to make it desirable by taking them there in their imaginations." -- Noel Tichy