
Daily Tech Digest - August 18, 2022

How Productivity And Surveillance Technology Can Create A Crisis For Businesses

“The use of productivity and surveillance technology can create crisis situations for companies and organizations due to the fact that they are not always clear on what they are getting into,” according to Jeff Colt, founder and CEO of Aquarium Fish City, an aquarium and aquatic website. “Companies oftentimes do not fully understand the ramifications of using these tools. For example, if a company decides to implement surveillance technology in the workplace, it needs to make sure that it is not violating any laws. Additionally, it needs to make sure that it is not infringing on any employee rights or privacy rights,” he said in a statement. “The use of productivity and surveillance technology can also create crisis situations because some people may not be comfortable with being monitored by their employers. This could lead some employees to feel like they are being treated unfairly as well as causing them to quit their jobs altogether,” Colt noted. ... “The use of these technologies can have the opposite intended effect when not managed properly,” said Natalia Morozova, managing partner at Cohen, Tucker & Ades, an immigration law firm.


The benefits of regenerative architecture and unlocking the data potential in buildings

Regenerative architecture is “architecture that focuses on conservation and performance through a focused reduction on the environmental impacts of a building.” It can allow buildings to generate their own electricity and provides structures to sell excess energy back to the grid, creating a comprehensive, self-sustaining prosumer architecture. By producing their own energy through solar panels and wind turbines, these buildings significantly lower their carbon emissions and have more resilience in the face of extreme weather events. Some can even reverse environmental damage. But to fully leverage these opportunities, building owners and facility managers need smarter control of their energy. The right data, insights, and control help to make fast decisions and act on them. This is possible through the power of digitalization in buildings. Buildings are responsible for 40% of the world’s CO2 emissions, second only to manufacturing. Yet, 30% of energy in buildings is wasted, often heating, cooling, and lighting empty spaces.


Quantum Physics Could Finally Explain Consciousness, Scientists Say

The existence of free will as an element of consciousness also seems to be a deeply non-deterministic concept. Recall that in mathematics, computer science, and physics, deterministic functions or systems involve no randomness in the future state of the system; in other words, a deterministic function will always yield the same results if you give it the same inputs. Meanwhile, a nondeterministic function or system will give you different results every time, even if you provide the same input values. “I think that’s why cognitive sciences are looking toward quantum mechanics. In quantum mechanics, there is room for chance,” Danielsson tells Popular Mechanics. “Consciousness is a phenomenon associated with free will and free will makes use of the freedom that quantum mechanics supposedly provides.” However, Jeffrey Barrett, chancellor’s professor of logic and philosophy of science at the University of California, Irvine, thinks the connection is somewhat arbitrary from the cognitive science side.
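
To make that distinction concrete, here is a minimal Python sketch (my own illustration, not from the article) contrasting a deterministic function, which always returns the same output for the same input, with a nondeterministic one, whose output also depends on randomness:

```python
import random

def deterministic_square(x):
    # No randomness: the same input always produces the same output.
    return x * x

def nondeterministic_square(x):
    # The same input can produce a different output on every call,
    # because a random perturbation enters the result.
    return x * x + random.gauss(0, 0.1)

print(deterministic_square(3), deterministic_square(3))        # always 9 9
print(nondeterministic_square(3), nondeterministic_square(3))  # almost surely differ
```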


Eclypsium calls out Microsoft over bootloader security woes

The malicious shell activity involves visual elements that could potentially be detected by users on workstation monitors during the boot process; however, the vulnerabilities are especially dangerous for servers and industrial control systems that lack displays. The third vulnerability, CVE-2022-34302, is even harder to detect, as exploitation would remain virtually invisible to system owners. The researchers discovered that the New Horizon DataSys bootloader contains a small file that acts as a built-in bypass for Secure Boot; the 73 KB file disables the Secure Boot check without turning the protocol off completely, and it also has the ability to execute additional bypasses for security handlers. The discovery of the Horizon DataSys built-in Secure Boot bypass was definitely a "holy crap moment," Shkatov told SearchSecurity. The researchers said admin access is required for full exploitation, but they demonstrated an exploit during the presentation that used a phishing email and a malicious Word document that elevated their privileges to admin. 


Things You Should Know About Artificial Intelligence and Design

Nearly anyone who lives in the modern world produces data, often on the order of terabytes per day. We text our friends, stream videos, use fitness apps, ask Siri about the weather while we look out the window, walk by CCTV cameras, and the list goes on. Most of these data are unstructured, i.e. not organized in any clear order. Machine learning provides a way for computers to glean meaning from this lack of structure. As Armstrong puts it, “even now as you read, computers sift and categorize your data trails—both unstructured and structured — plunging deeper into who you are and what makes you tick.” How does machine learning do this? The short answer is algorithms, statistical analysis, and prediction. Not sure what any of those words mean? ... As a researcher dedicated to demystifying emerging technology for landscape architects, I believe it is vital we get designers of all demographics and digital abilities to a shared understanding of what AI is so we can all better facilitate its continued permeation into practice. Big Data. Big Design. does this in spades.


The effect of digital transformation on the CIO job

The CIO has always been a super-important role. I'd liken it [in the past] to the role of a flight engineer. You can't take off if the flight engineer is not on board; he or she serves a super-important purpose – it's mission critical, it's a lights-on operation. It's about delivering a really important capability: to keep the engine, the plane running, in this case, the enterprise running. We're seeing a big change happen because with digital transformation -- and using technology to deliver a new business value proposition -- the world is now starting to center around digital. And the role of the CIO is changing because he or she's now more and more becoming the pilot or the co-pilot, helping colleagues and their stakeholders and the rest of the executive committee to really reimagine the business value proposition on the back of new technology. And so that's one big change that we're going through because the [CIO] seat at the table, the role of the individual, is completely changing. I think another thing that's happening is that tech is no longer the long pole in the tent. And what I mean by that is when you do digital transformation, it isn't just the tech, it's the data. 


How Can Clinical Trials Benefit From Natural language processing (NLP)?

NLP can help identify patterns in participant responses that may indicate whether a treatment is effective. This information can improve the accuracy of trial results and help researchers make better decisions about which treatments to pursue. In addition, NLP can help researchers understand why certain participants respond well or poorly to a treatment. This knowledge can help develop more effective treatments in the future. Several different NLP tools can be used in clinical trials. The most commonly used tools include machine learning algorithms, text mining techniques, and Word2Vec models. Each has advantages and disadvantages. Therefore, it’s crucial to pick the appropriate tool for the job. Fortunately, many software platforms provide pre-built libraries that make it easy to use NLP in your research projects. Natural language processing (NLP) has significantly impacted clinical trials by helping researchers identify patterns in participant feedback. This has allowed for more informed decisions about modifying or improving treatments. 
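
As a rough illustration of the pattern-finding step, the sketch below pairs TF-IDF features with a simple classifier over free-text participant feedback. It is a minimal sketch only: the feedback strings and labels are invented, and this is not the tooling of any particular trial platform.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example feedback; 1 = responded well to treatment, 0 = did not.
feedback = [
    "headaches stopped after the second week",
    "no change in symptoms, still fatigued",
    "sleeping much better, pain is manageable",
    "side effects worse than the original condition",
]
responded = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(feedback, responded)

# Terms with large positive coefficients hint at language associated with response.
vec = model.named_steps["tfidfvectorizer"]
clf = model.named_steps["logisticregression"]
top_terms = sorted(zip(clf.coef_[0], vec.get_feature_names_out()), reverse=True)[:5]
print(top_terms)
```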


New neuromorphic chip for AI on the edge, at a small fraction of the energy and size of today's computing platforms

The key to NeuRRAM's energy efficiency is an innovative method to sense output in memory. Conventional approaches use voltage as input and measure current as the result. But this leads to the need for more complex and more power-hungry circuits. In NeuRRAM, the team engineered a neuron circuit that senses voltage and performs analog-to-digital conversion in an energy-efficient manner. This voltage-mode sensing can activate all the rows and all the columns of an RRAM array in a single computing cycle, allowing higher parallelism. In the NeuRRAM architecture, CMOS neuron circuits are physically interleaved with RRAM weights. It differs from conventional designs where CMOS circuits are typically on the periphery of the RRAM weights. The neuron's connections with the RRAM array can be configured to serve as either input or output of the neuron. This allows neural network inference in various data flow directions without incurring overheads in area or power consumption. This in turn makes the architecture easier to reconfigure.


Monoliths to Microservices: 4 Modernization Best Practices

Surveys have shown that manually analyzing a monolith using sticky notes on whiteboards takes too long, costs too much and rarely ends in success. Which architect or developer in your team has the time and ability to stop what they’re doing to review millions of lines of code and tens of thousands of classes by hand? Large monolithic applications need an automated, data-driven way to identify potential service boundaries. ... When everything was in the monolith, your visibility was somewhat limited. If you’re able to expose the suggested service boundaries, you can begin to make decisions and test design concepts — for example, identifying overlapping functionality in multiple services. ... We all know that naming things is hard. When dealing with monolithic services, we can really only use the class names to figure out what is going on. With this information alone, it’s difficult to accurately identify which classes and functionality may belong to a particular domain. ... What qualities suggest that functionality previously contained in a monolith deserves to be a microservice?
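
As a deliberately naive illustration of what "data-driven" can mean at the very first pass, the sketch below groups a monolith's fully qualified class names by package segment to surface candidate service boundaries. The class names are hypothetical, and real modernization tools go much further, folding in call graphs and runtime telemetry.

```python
from collections import defaultdict

# Hypothetical fully qualified class names pulled from a monolith's codebase.
classes = [
    "com.shop.billing.InvoiceService",
    "com.shop.billing.PaymentGateway",
    "com.shop.catalog.ProductRepository",
    "com.shop.catalog.PriceCalculator",
    "com.shop.shipping.LabelPrinter",
]

candidates = defaultdict(list)
for name in classes:
    domain = name.split(".")[2]          # e.g. "billing", "catalog", "shipping"
    candidates[domain].append(name)

# Each group is a first guess at a service boundary worth reviewing by hand.
for domain, members in candidates.items():
    print(domain, "->", members)
```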


PC store told it can't claim full cyber-crime insurance after social-engineering attack

According to Chief District Judge Patrick Schiltz, who handed down the order, this case treads somewhat new legal ground. In the opinion, Schiltz noted that both SJ's lawsuit and Travelers' dismissal motion only cite three other cases, all from different jurisdictions, that "analyze the concept of direct causation in the context of computer or social-engineering fraud." All of those cases had a major difference in common, the court pointed out – none of them involved insurance policies that cover both computer and social engineering fraud, or make clear that the two types of fraud are different, mutually exclusive categories. This case, therefore, is less of a litmus test for the future of legal disagreements around social engineering insurance payouts, and more an examination of a close reading of contracts. "[Travelers'] Policy clearly anticipates – and clearly addresses – precisely the situation that gave rise to SJ Computers' loss, and the Policy bends over backwards to make clear that this situation involves social-engineering fraud, not computer fraud," Schiltz said.



Quote for the day:

"People only bring up your past when they are intimidated by your present." -- Joubert Botha

Daily Tech Digest - May 17, 2022

Only DevSecOps can save the metaverse

We’ve previously talked about “shifting left,” or DevSecOps, the practice of making security a “first-class citizen” when it comes to software development, baking it in from the start rather than bolting it on in runtime. Log4j, SolarWinds, and other high-profile software supply chain attacks only underscore the importance and urgency of shifting left. The next “big one” is inevitably around the corner. A more optimistic view is that far from highlighting the failings of today’s development security, the metaverse might be yet another reckoning for DevSecOps, accelerating the adoption of automated tools and better security coordination. If so, that would be a huge blessing to make up for all the hard work. As we continue to watch the rise of the metaverse, we believe supply chain security should take center stage and organizations will rally to democratize security testing and scanning, implement software bill of materials (SBOM) requirements, and increasingly leverage DevSecOps solutions to create a full chain of custody for software releases to keep the metaverse running smoothly and securely.


EU Parliament, Council Agree on Cybersecurity Risk Framework

"The revised directive aims to remove divergences in cybersecurity requirements and in implementation of cybersecurity measures in different member states. To achieve this, it sets out minimum rules for a regulatory framework and lays down mechanisms for effective cooperation among relevant authorities in each member state. It updates the list of sectors and activities subject to cybersecurity obligations, and provides for remedies and sanctions to ensure enforcement," according to the Council of the EU. The directive will also establish the European Union Cyber Crises Liaison Organization Network, EU-CyCLONe, which will support the coordinated management of large-scale cybersecurity incidents. The European Commission says that the latest framework is set up to counter Europe's increased exposure to cyberthreats. The NIS2 directive will also cover more sectors that are critical for the economy and society, including providers of public electronic communications services, digital services, waste water and waste management, manufacturing of critical products, postal and courier services and public administration, both at a central and regional level.


Catalysing Cultural Entrepreneurship in India

What constitutes CCIs varies across countries depending on their diverse cultural resources, know-how, and socio-economic contexts. A commonly accepted understanding of CCIs comes from the United Nations Educational, Scientific and Cultural Organization (UNESCO), which defines this sector as “activities whose principal purpose is production or reproduction, promotion, distribution or commercialisation of goods, services, and activities of a cultural, artistic, or heritage-related nature.” CCIs play an important role in a country’s economy: they offer recreation and well-being, while spurring innovation and economic development at the same time. First, a flourishing cultural economy is a driver of economic growth as attaching commercial value to cultural products, services, and experiences leads to revenue generation. These cultural goods and ideas are also contributors to international trade. Second, although a large workforce in this space is informally organised and often unaccounted for in official labour force statistics, cultural economies are some of the biggest employers of artists, craftspeople, and technicians. 


Rethinking Server-Timing As A Critical Monitoring Tool

Server-Timing is uniquely powerful, because it is the only HTTP Response header that supports setting free-form values for a specific resource and makes them accessible from a JavaScript Browser API separate from the Request/Response references themselves. This allows resource requests, including the HTML document itself, to be enriched with data during its lifecycle, and that information can be inspected for measuring the attributes of that resource! The only other headers that come close to this capability are the HTTP Set-Cookie / Cookie headers. Unlike Cookie headers, Server-Timing is only on the response for a specific resource, whereas Cookies are sent on requests and responses for all resources after they’re set and unexpired. Having this data bound to a single resource response is preferable, as it prevents ephemeral data about all responses from becoming ambiguous or contributing to a growing collection of cookies sent for remaining resources during a page load.
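
For concreteness, here is a minimal Flask sketch (my own, not from the article) that attaches a Server-Timing header to a single response; browsers then expose those entries to JavaScript through the PerformanceResourceTiming serverTiming API. The metric names and durations are invented.

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/report")
def report():
    resp = jsonify({"status": "ok"})
    # Free-form, per-resource metrics in the form: name;dur=milliseconds;desc="label"
    resp.headers["Server-Timing"] = 'db;dur=53.2;desc="orders query", cache;desc="hit", app;dur=47.1'
    return resp

if __name__ == "__main__":
    app.run()
```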


Scalability and elasticity: What you need to take your business to the cloud

At a high level, there are two types of architectures: monolithic and distributed. Monolithic (or layered, modular monolith, pipeline, and microkernel) architectures are not natively built for efficient scalability and elasticity — all the modules are contained within the main body of the application and, as a result, the entire application is deployed as a single whole. There are three types of distributed architectures: event-driven, microservices and space-based. ... For application scaling, adding more instances of the application with load-balancing ends up scaling out the other two portals as well as the patient portal, even though the business doesn’t need that. Most monolithic applications use a monolithic database — one of the most expensive cloud resources. Cloud costs grow exponentially with scale, and this arrangement is expensive, especially regarding maintenance time for development and operations engineers. Another aspect that makes monolithic architectures unsuitable for supporting elasticity and scalability is the mean-time-to-startup (MTTS) — the time a new instance of the application takes to start. 


Proof of Stake and our next experiments in web3

Proof of Stake is a next-generation consensus protocol to secure blockchains. Unlike Proof of Work, which relies on miners racing each other with increasingly complex cryptography to mine a block, Proof of Stake secures new transactions to the network through self-interest. Validator nodes (the people who verify new blocks for the chain) are required to put a significant asset up as collateral in a smart contract to prove that they will act in good faith. For instance, for Ethereum that is 32 ETH. Validator nodes that follow the network's rules earn rewards; validators that violate the rules will have portions of their stake taken away. Anyone can operate a validator node as long as they meet the stake requirement. This is key. Proof of Stake networks require lots and lots of validator nodes to validate and attest to new transactions. The more participants there are in the network, the harder it is for bad actors to launch a 51% attack to compromise the security of the blockchain. To add new blocks to the Ethereum chain once it shifts to Proof of Stake, validators are chosen at random to create (validate) new blocks.
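
The random selection step can be sketched with a toy stake-weighted draw. This is only an intuition pump: real protocols such as Ethereum's use verifiable on-chain randomness, fixed 32 ETH deposits per validator, and far more involved slashing rules than this.

```python
import random

# validator -> staked ETH (toy numbers; Ethereum uses fixed 32 ETH deposits)
validators = {"alice": 32, "bob": 32, "carol": 32}
MIN_STAKE = 32

def choose_proposer(validators):
    # Only validators that have posted the required collateral are eligible,
    # and the draw is weighted by stake.
    eligible = {v: s for v, s in validators.items() if s >= MIN_STAKE}
    names, stakes = zip(*eligible.items())
    return random.choices(names, weights=stakes, k=1)[0]

def slash(validators, name, fraction=0.5):
    # Validators that violate the rules lose a portion of their stake.
    validators[name] -= validators[name] * fraction

print(choose_proposer(validators))
slash(validators, "bob")
print(validators)   # bob now falls below the stake requirement
```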


Is NLP innovating faster than other domains of AI

There have been several stages in the evolution of the natural language processing field. It started in the 80s with expert systems, moved on to the statistical revolution, and finally to the neural revolution. Speaking of the neural revolution, it was enabled by the combination of deep neural architectures, specialised hardware, and a large amount of data. That said, the revolution in the NLP domain was much slower than in other fields like computer vision, which benefitted greatly from the emergence of large scale pre-trained models, which, in turn, were enabled by large datasets like ImageNet. Pretrained ImageNet models helped in achieving state-of-the-art results in tasks like object detection, human pose estimation, semantic segmentation, and video recognition. They enabled the application of computer vision to domains where the number of training examples is small, and annotation is expensive. One of the most definitive inventions in recent times was the Transformer. Developed at Google Brain in 2017, the Transformer is a novel neural network architecture based on the concept of the self-attention mechanism. The model outperformed both recurrent and convolutional models. 
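
At its core, the self-attention mechanism lets every position in a sequence attend to every other position, weighted by query-key similarity. Below is a minimal NumPy sketch of single-head scaled dot-product attention (my own illustration, not Google's code):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # each output is a weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                          # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (4, 8)
```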

Before you get too excited about Power Query in Excel Online, though, remember one important difference between it and a Power BI report or a paginated report. In a Power BI report or a paginated report, when a user views a report, nothing they do – slicing, dicing, filtering etc – affects or is visible to any other users. With Power Query and Excel Online however you’re always working with a single copy of a document, so when one user refreshes a Power Query query and loads data into a workbook that change affects everyone. As a result, the kind of parameterised reports I show in my SQLBits presentation that work well in desktop Excel (because everyone can have their own copy of a workbook) could never work well in the browser, although I suppose Excel Online’s Sheet View feature offers a partial solution. Of course not all reports need this kind of interactivity and this does make collaboration and commenting on a report much easier; and when you’re collaborating on a report the Show Changes feature makes it easy to see who changed what.


Observability Powered by SQL: Understand Your Systems Like Never Before With OpenTelemetry Traces and PostgreSQL

Given that observability is an analytics problem, it is surprising that the current state of the art in observability tools has turned its back on the most common standard for data analysis broadly used across organizations: SQL. Good old SQL could bring some key advantages: it’s surprisingly powerful, with the ability to perform complex data analysis and support joins; it’s widely known, which reduces the barrier to adoption since almost every developer has used relational databases at some point in their career; it is well-structured and can support metrics, traces, logs, and other types of data (like business data) to remove silos and support correlation; and finally, visualization tools widely support it. ... You're probably thinking that observability data is time-series data that relational databases struggle with once you reach a particular scale. Luckily, PostgreSQL is highly flexible and allows you to extend and improve its capabilities for specific use cases. TimescaleDB builds on that flexibility to add time-series superpowers to the database and scale to millions of data points per second and petabytes of data.
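
As a flavour of what "observability in SQL" looks like from Python, the sketch below queries a hypothetical spans table for the slowest operations over the last hour. The table and column names (spans, name, start_time, duration_ms, status_code) are assumptions rather than a documented schema; adapt them to however your OpenTelemetry collector writes traces into PostgreSQL/TimescaleDB.

```python
import psycopg2

# Hypothetical schema: one row per OpenTelemetry span.
SLOWEST_OPERATIONS = """
SELECT name,
       count(*)                                                   AS calls,
       percentile_cont(0.95) WITHIN GROUP (ORDER BY duration_ms)  AS p95_ms,
       count(*) FILTER (WHERE status_code = 'ERROR')              AS errors
FROM   spans
WHERE  start_time > now() - interval '1 hour'
GROUP  BY name
ORDER  BY p95_ms DESC
LIMIT  10;
"""

with psycopg2.connect("dbname=otel") as conn, conn.cursor() as cur:
    cur.execute(SLOWEST_OPERATIONS)
    for row in cur.fetchall():
        print(row)
```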


Why cyber security can’t just say “no“

Ultimately, IT security is all about keeping the company safe from damages — financial damages, operational damages, reputational and brand damages. You’re trying to prevent a situation that not only will harm the company’s well-being, but also that of its employees. That is why we need to explain the actual threats and how incidents occur. Explain what steps can be taken to lower the chances and impact of those incidents occurring and show them how they can be part of that. People love learning new things, especially if it has something to do with their daily work. Explain the tradeoffs that are being made, at least in high-level terms. Explain how quickly convenience, such as running a machine as an administrator, can lead to abuse. Not only will the companies appreciate you for your honesty, but they will have the right answer the next time the question comes up. They’ll think along the constraints and find new ways of adding value to the business, while removing factors from their daily work that might result in one less incident down the line.



Quote for the day:

"Real leadership is being the person others will gladly and confidently follow." -- John C. Maxwell

Daily Tech Digest - April 19, 2022

So you're thinking about migrating to Linux? Here's what you need to know

The Linux desktop is so easy. It really is. Developers and designers of most distributions have gone out of their way to ensure the desktop operating system is easy to use. During those early years of using Linux, the command line was an absolute necessity. Today? Not so much. In fact, Linux has become so easy and user-friendly that you could go your entire career on the desktop and never touch the terminal window. That's right, Linux of today is all about the GUI and the GUIs are good. If you can use macOS or Windows, you can use Linux. It doesn't matter how skilled you are with a computer, Linux is a viable option. In fact, I'd go so far as to say that the less skill you have with a computer, the better off you are with Linux. Why? Linux is far less "breakable" than Windows. You really need to know what you're doing to break a Linux system. One very quick way to start an argument within the Linux community is to say Linux isn't just a kernel. In a similar vein, a very quick way to confuse a new user is to tell them that Linux is only the kernel. ... Yes, Linux uses the Linux kernel. All operating systems have a kernel, but you don't ever hear Windows or macOS users talk about which kernel they use.


Purpose is a two-way street

There’s a broader redefinition of purpose that’s underway both for organizations and individuals. Today, people don’t have just one single career in a lifetime but five or six—and their goals and purpose vary at each stage. At the same time, organizations can’t address or engage with the broad range of stakeholders they deal with through just one single purpose. In combination, these shifts are ushering in the concept of purpose as a “cluster” of goals and experiences, with different aspects resonating with different stakeholders at different times. The same cluster concept holds true for career paths. It is vital to expand the conversation about the varied, unique options people have to fulfill their goals. Companies must strive to make those options more transparent, more individualized, and more flexible, and less linear. For today’s employees, the point of a career path is not necessarily to climb a ladder with a particular end-state in mind but to gain experience and pursue the individual’s purpose—a purpose that may shift and evolve over time. To that end, it may make sense for organizations to create paths that allow employees to move within and across, and even outside, an organization—not just up—to achieve their goals.


How algorithmic automation could manage workers ethically

Mewies says bias in automated systems generates significant risks for employers that use them to select people for jobs or promotion, because it may contravene anti-discrimination law. For projects involving systemic or potentially harmful processing of personal data, organisations have to carry out a privacy impact assessment, she says. “You have to satisfy yourself that where you were using algorithms and artificial intelligence in that way, there was going to be no adverse impact on individuals.” But even when not required, undertaking a privacy impact assessment is a good idea, says Mewies, adding: “If there was any follow-up criticism of how a technology had been deployed, you would have some evidence that you had taken steps to ensure transparency and fairness.” ... Antony Heljula, innovation director at Chesterfield-based data science consultancy Peak Indicators, says data models can exclude sensitive attributes such as race, but this is far from foolproof, as Amazon showed a few years ago when it built an AI CV-rating system trained on a decade of applications, to find that it discriminated against women.


The changing role of the CCO: Champion of innovation and business continuity

The best CCOs partner with the business to really understand how to place gates and controls that mitigate risk, while still allowing the business to operate at maximum efficiency. One area of the business that is particularly valuable is the IT department, which can help CCOs to maintain and provide systematic proof of both adherence to internal policies and the external laws, guidelines or regulations imposed upon the company. By having a dedicated IT resource, CCOs do not have to wait for the next programme increment (PI), sprint planning or IT resourcing availability. Instead, they can be agile and proactive when it comes to meeting business growth and revenue objectives. Technical resourcing can be utilised for project governance, systems review, data science, AML and operational analytics, as well as support audit / reporting with internal / external stakeholders, investors, regulators, creditors and partners. Ultimately this partnership between IT and CCOs will allow a business to make data-driven decisions that meet compliance as well corporate growth mandates.


IT Admins Need a Vacation

An unhappy sysadmin can breed apathy, and an apathetic attitude is especially problematic when sysadmins are responsible for cybersecurity. Even in organizations where cybersecurity and IT are separate, sysadmins affect cybersecurity in some way, whether it’s through patching, performing data backups, or reviewing logs. This problem is industry-wide, and it will take more than just one person to solve it, but I’m in a unique position to talk about it. I’ve held sysadmin roles, and I’m the co-founder and CTO of a threat detection and response company in which I oversee technical operations. One of my top priorities is building solutions that won’t tip over and require significant on-call support. The tendency to paper over a problem with human effort 24/7 is a tragedy in the IT space and should be solved with technology wherever possible. As someone who manages employees that are on-call and is still on-call, I need to be in tune with the mental health of my team members and support them to prevent burnout. I need to advocate for my employees to be compensated generously and appreciate and reward them for a job well done.


The steady march of general-purpose databases

Brian Goetz has a funny way of explaining the phenomenon, called Goetz’s Law: “Every declarative language slowly slides towards being a terrible general-purpose language.” Perhaps a more useful explanation comes from Stephen Kell, who argues that “the endurance of C is down to its extreme openness to interaction with other systems via foreign memory, FFI, dynamic linking, etc.” In other words, C endures because it takes on more functionality, allowing developers to use it for more tasks. That’s good, but I like Timothy Wolodzko’s explanation even more: “As an industry, we're biased toward general-purpose tools [because it’s] easier to hire devs, they are already widely adopted (because being general purpose), often have better documentation, are better maintained, and can be expected to live longer.” Some of this merely describes the results of network effects, but how general purpose enables those network effects is the more interesting observation. Similarly, one commenter on Bernhardsson’s post suggests, “It's not about general versus specialized” but rather “about what tool has the ability to evolve.”


Open-Source NLP Is A Gift From God For Tech Start-ups

Lately, however, open research efforts like Eleuther AI have lowered the barriers to entry. A grassroots collective of AI researchers, Eleuther AI aims to eventually deliver the code and datasets needed to run a model comparable (though not identical) to GPT-3. The group has already released a dataset called ‘The Pile’ that is designed to train large language models to complete text, write code, and more. (As it happens, Megatron 530B was trained along the lines of The Pile.) And in June, Eleuther AI made available under the Apache 2.0 license GPT-Neo and its successor, GPT-J, a language model that performs nearly on par with a similarly sized GPT-3 model. One of the startups serving Eleuther AI’s models as a service is NLP Cloud, which was founded a year earlier by Julien Salinas, a former developer at Hunter.io and the founder of the money-lending service StudyLink.fr. 
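
For developers who want to try these open models, GPT-J is published on the Hugging Face Hub and loads through the standard transformers API. The sketch below assumes the "EleutherAI/gpt-j-6B" checkpoint (roughly 24 GB of full-precision weights) and a machine with enough memory to hold it; the smaller GPT-Neo checkpoints follow the same pattern.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6B"   # assumes this checkpoint and sufficient RAM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open-source language models let startups", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```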


SQL and Complex Queries Are Needed for Real-Time Analytics

While taking the NoSQL road is possible, it’s cumbersome and slow. Take an individual applying for a mortgage. To analyze their creditworthiness, you would create a data application that crunches data, such as the person’s credit history, outstanding loans and repayment history. To do so, you would need to combine several tables of data, some of which might be normalized, some of which are not. You might also analyze current and historical mortgage rates to determine what rate to offer. With SQL, you could simply join tables of credit histories and loan payments together and aggregate large-scale historic data sets, such as daily mortgage rates. However, using something like Python or Java to manually recreate the joins and aggregations would multiply the lines of code in your application by tens or even a hundred compared to SQL. More application code not only takes more time to create, but it almost always results in slower queries. Without access to a SQL-based query optimizer, accelerating queries is difficult and time-consuming because there is no demarcation between the business logic in the application and the query-based data access paths used by the application.
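
To make the contrast concrete, here is a sketch of the kind of query the article describes, run from Python against SQLite so it stays self-contained. The schema (applicants, credit_history, loan_payments, daily_mortgage_rates) is entirely hypothetical and exists only to show joins and aggregations doing the work that would otherwise take dozens of lines of application code.

```python
import sqlite3  # any SQL engine works; SQLite keeps the sketch self-contained

# Hypothetical creditworthiness query: join credit history and loan payments,
# then fold in the average mortgage rate over the last 90 days.
CREDITWORTHINESS = """
SELECT a.applicant_id,
       ch.credit_score,
       SUM(lp.amount_due - lp.amount_paid)                 AS outstanding_balance,
       AVG(CASE WHEN lp.paid_on_time THEN 1.0 ELSE 0 END)  AS on_time_ratio,
       (SELECT AVG(rate) FROM daily_mortgage_rates
         WHERE rate_date >= DATE('now', '-90 days'))       AS avg_90d_rate
FROM   applicants a
JOIN   credit_history ch ON ch.applicant_id = a.applicant_id
JOIN   loan_payments  lp ON lp.applicant_id = a.applicant_id
WHERE  a.applicant_id = ?
GROUP  BY a.applicant_id, ch.credit_score;
"""

conn = sqlite3.connect("lending.db")   # hypothetical database file
row = conn.execute(CREDITWORTHINESS, (42,)).fetchone()
print(row)
```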


Lack of expertise hurting UK government’s cyber preparedness

In France, security pros tended to find tender and bidding processes more of an issue, but also cited a lack of trusted partners, budget, and ignorance of cyber among organisational leadership. German responders also faced problems with tendering, and similar problems to both the British and French. From a technological perspective, UK-based respondents cited endpoint detection and response (EDR) and extended detection and response (XDR) and cloud security modernisation as the most mature defensive solutions, with 37% saying they were “fully deployed” in this area. Zero trust tailed with 32%, and multi-factor authentication (MFA) was cited by 31% – Brits tended to think MFA was more difficult than average to implement, as well. The French, on the other hand, are doing much better on MFA, with 47% of respondents claiming full deployment, 35% saying they had fully deployed EDR-XDR, and 33% and 30% saying they had fully implemented cloud security modernisation and zero trust respectively. In contrast to this, the Germans tended to be better on cloud security modernisation, which 40% claimed to have fully implemented, followed by zero trust at 32%, MFA at 30% and EDR-XDR at 27%.


Scrum Master Anti-Patterns

The reasons Scrum Masters violate the spirit of the Scrum Guide are multi-faceted. They run from ill-suited personal traits to pursuing their agendas to frustration with the Scrum team. Some often-observed reasons are: Ignorance or laziness: One size of Scrum fits every team. Your Scrum Master learned the trade in a specific context and is now rolling out precisely this pattern in whatever organization they are active, no matter the context. Why go through the hassle of teaching, coaching, and mentoring if you can shoehorn the “right way” directly into the Scrum team?; Lack of patience: Patience is a critical resource that a successful Scrum Master needs to field in abundance. But, of course, there is no fun in readdressing the same issue several times, rephrasing it probably, if the solution is so obvious—from the Scrum Master’s perspective. So, why not tell them how to do it ‘right’ all the time, thus becoming more efficient? Too bad that Scrum cannot be pushed but needs to be pulled—that’s the essence of self-management; Dogmatism: Some Scrum Masters believe in applying the Scrum Guide literally, which unavoidably will cause friction as Scrum is a framework, not a methodology.



Quote for the day:

"No organization should be allowed near disaster unless they are willing to cooperate with some level of established leadership." -- Irwin Redlener

Daily Tech Digest - February 27, 2022

Oh, Snap! Security Holes Found in Linux Packaging System

The first problem was that the snap daemon snapd didn’t properly validate the snap-confine binary’s location. Because of this, a hostile user could hard-link the binary to another location. This, in turn, meant a local attacker might be able to use this issue to execute other arbitrary binaries and escalate privileges. The researchers also discovered that a race condition existed in the snapd snap-confine binary when preparing a private mount namespace for a snap. With this, a local attacker could gain root privileges by bind-mounting their own contents inside the snap’s private mount namespace. With that, they could make snap-confine execute arbitrary code. From there, it’s easy for an attacker to start privilege escalation and try to make it all the way to root. There’s no remote way to directly exploit this. But, if an attacker can log in as an unprivileged user, the attacker could quickly use this vulnerability to gain root privileges. Canonical has released a patch that fixes both security holes. The patch is available in the following supported Ubuntu releases: 21.10, 20.04, and 18.04. A simple system update will fix this nicely.


The DAO is a major concept for 2022 and will disrupt many industries

It is not yet clear where these disruptive technologies will lead us, but we are sure that there will be much value up for grabs. At the convergence of Web3 and NFTs lie many platforms looking to leverage technology and infrastructure to make the NFT ecosystem more decentralized, structured and community-driven. Using both social building and governance, the decentralized autonomous organization disruption is a notch higher. The DAO is one major invention that is challenging current systems of governance. Utilizing NFTs, DAOs are changing our perspective of how organizations and systems should be run, and they put further credence to the idea that the optimal form of governance does not have to do with hierarchical structures. With the principal-agent problem limiting the growth of organizations and preventing agents from feeling like part of a team, you can see why the need for decentralized organizations fostering community-inclusion is paramount. Is there something you would change about your current organization if given the chance? Leadership? 


Use the cloud to strengthen your supply chain

What’s interesting about this process is that it does not entail executives in the C-suites pulling all-nighters to come up with these innovative solutions. It’s 100% automated using huge amounts of data and machine learning and embedding these things directly within business processes so the fix happens seconds after the supply chain problem is found. These aspects of intelligent supply chain automation are not new. For years, there has been some deep thinking in terms of how to automate supply chains more effectively. Those of you who specialize in supply chains understand this far too well. How many companies are willing to invest in the innovation—and even the risk—of leveraging these new systems? Most are not, and they are seeing the downsides from the markets tossing them curveballs that they try to deal with using traditional approaches. We’re seeing companies that have been in 10th place in a specific market move to second or third place by differentiating themselves with these intelligent cloud-based systems.


Open Source Code: The Next Major Wave of Cyberattacks

When it comes to testing the resilience of your open source environment with tools, static code analysis is a good first step. Still, organizations must remember that this is only the first layer of testing. Static analysis refers to analyzing the source code before the actual software application or program goes live and addressing any discovered vulnerabilities. However, static analysis cannot detect all malicious threats that could be embedded in open source code. Additional testing in a sandbox environment should be the next step. Stringent code reviews, dynamic code analysis, and unit testing are other methods that can be leveraged. After scanning is complete, organizations must have a clear process to address any discovered vulnerabilities. Developers may be finding themselves against a release deadline, or the software patch may require refactoring the entire program and put a strain on timelines. This process should help developers address tough choices to protect the organization's security by giving clear next steps for addressing vulnerabilities and mitigating issues.


A guide to document embeddings using Distributed Bag-of-Words (DBOW) model

Beyond practice, when it comes to real-world applications of NLP, machines are required to understand the context behind text, which is usually longer than a single word. For example, suppose we want to find cricket-related tweets on Twitter. We can start by making a list of all the words that are related to cricket and then try to find tweets that contain any word from the list. This approach can work to an extent, but what if a tweet related to cricket does not contain any words from the list? Take, for example, a tweet that contains the name of an Indian cricketer without mentioning that he is an Indian cricketer. In our daily life, we may find many applications and websites, like Facebook, Twitter, Stack Overflow, etc., which use this approach and fail to obtain the right results for us. To cope with such difficulties we may use document embeddings, which learn a vector representation of each document rather than relying on word embeddings alone. This can also be considered as learning the vector representation in a paragraph setting instead of learning word-level representations from the whole corpus.
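
In practice, the DBOW variant is available off the shelf: in gensim's Doc2Vec, passing dm=0 selects Distributed Bag-of-Words. Below is a minimal sketch with invented tweets standing in for the cricket example above.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Invented tweets standing in for a real corpus.
tweets = [
    "what a cover drive by kohli in the last over",
    "new phone launch today, camera looks amazing",
    "india need 40 runs off 18 balls, incredible chase",
    "traffic on the highway is terrible this morning",
]
corpus = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(tweets)]

# dm=0 selects the Distributed Bag-of-Words (DBOW) training scheme.
model = Doc2Vec(corpus, dm=0, vector_size=50, min_count=1, epochs=100)

# Embed an unseen tweet and find the most similar training documents.
vector = model.infer_vector("rohit hits another six".split())
print(model.dv.most_similar([vector], topn=2))
```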


Great Resignation or Great Redirection?

All this Great Resignation talk has many panicking and being reactive. We definitely shouldn’t ignore it, but we should seek to understand what is happening and why. And what the implications are for the future. The truly historical event is the revolution in how people conceive of work and its relationship to other life priorities. Even within that, there are distinctively different categories. We know service workers in leisure and hospitality got hit disproportionately hard by the pandemic. These people unexpectedly found themselves jobless, unsure how they would pay their bills and survive. Being resilient and hard-working, many — like my Uber driver — found gigs doing delivery, rideshare or other jobs giving greater flexibility and autonomy. These jobs also provided better pay than traditional service roles. Now, with their former jobs calling for their return, this group of workers has the ability to choose for themselves what they want. When Covid displaced office workers to their homes, they were bound to realize it was nice to not have that commute or the road warrior travel.


The post-quantum state: a taxonomy of challenges

While all the data seems to suggest that replacing classical cryptography by post-quantum cryptography in the key exchange phase of TLS handshakes is a straightforward exercise, the problem seems to be much harder for handshake authentication (or for any protocol that aims to give authentication, such as DNSSEC or IPsec). The majority of TLS handshakes achieve authentication by using digital signatures generated via advertised public keys in public certificates (what is called “certificate-based” authentication). Most of the post-quantum signature algorithms currently being considered for standardization in the NIST post-quantum process, have signatures or public keys that are much larger than their classical counterparts. Their operations’ computation time, in the majority of cases, is also much bigger. It is unclear how this will affect the TLS handshake latency and round-trip times, though we have a better insight now in respect to which sizes can be used. We still need to know how much slowdown will be acceptable for early adoption.


An overview of the blockchain development lifecycle

Databases developed with blockchain technologies are notoriously difficult to hack or manipulate, making them a perfect space for storing sensitive data. Blockchain software development requires an understanding of how blockchain technology works. To learn blockchain development, developers must be familiar with interdisciplinary concepts, for example, with cryptography and with popular blockchain programming languages like Solidity. A considerable amount of blockchain development focuses on information architecture, that is, how the database is actually to be structured and how the data to be distributed and accessed with different levels of permissions. ... Determine if the blockchain will include specific permissions for targeted user groups or if it will comprise a permissionless network. Afterward, determine whether the application will require the use of a private or public blockchain network architecture. Also consider the hybrid consortium, or public permissioned blockchain architecture. With a public permissioned blockchain, a participant can only add information with the permission of other registered participants.


How TypeScript Won Over Developers and JavaScript Frameworks

Microsoft’s emphasis on community also extends to developer tooling; another reason the Angular team cited for their decision to adopt the language. Microsoft’s own VS Code naturally has great support for TypeScript, but the TypeScript Language Server provides a common set of editor operations — like statement completions, signature help, code formatting, and outlining. This simplifies the job for vendors of alternative IDEs, such as JetBrains with WebStorm. Ekaterina Prigara, WebStorm project manager at JetBrains, told the New Stack that “this integration works side-by-side with our own support of TypeScript – some of the features of the language support are powered by the server, whilst others, e.g. most of the refactorings and the auto import mechanism, by the IDE’s own support.” The details of the integration are quite complex. Continued Prigara, “Completion suggestions from the server are shown but they could, in some cases, be enhanced with the IDE’s suggestions. It’s the same with the error detection and quick fixes. Formatting is done by the IDE. Inferred types shown on hover, if I’m not mistaken, come from the server. ...”


Developing and Testing Services Among a Sea of Microservices

The first option is to take all of the services that make up the entire application and put them on your laptop. This may work well for a smaller application, but if your application is large or has a large number of services, this solution won’t work very well. Imagine having to install, update, and manage 500, 1,000, or 5,000 services in your development environment on your laptop. When a change is made to one of those services, how do you get it updated? ... The second option solves some of these issues. Imagine having the ability to click a button and deploy a private version of the application in a cloud-based sandbox accessible only to you. This sandbox is designed to look exactly like your production environment. It may hopefully even use the same Terraform configurations to create the infrastructure and get it all connected, but it will use smaller cloud instances and fewer instances, so it won’t cost as much to run. Then, you can link your service running on your laptop to this developer-specific cloud setup and make it look like it’s running in a production environment.



Quote for the day:

"Courage is leaning into the doubts and fears to do what you know is right even when it doesn't feel natural or safe." -- Lee Ellis

Daily Tech Digest - December 16, 2021

The New Face of Data Management

Despite the data explosion, IT organizations haven’t necessarily changed storage strategies. They keep buying expensive storage devices because unassailable performance is required for critical or “hot” data. The reality is that all data is not diamonds. Some of it is emeralds and some of it is glass. By treating all data the same way, companies are creating needless cost and complexity. ... Yet as hot data continues to grow, the backup process becomes sluggish. So, you purchase expensive, top-of-line backup solutions to make this faster, but you still need ever-more storage for all these copies of your data. The ratio of unique data (created and captured) to replicated data (copied and consumed) is roughly 1:9. By 2024, IDC expects this ratio to be 1:10. Most organizations are backing up and replicating data that is in fact rarely accessed and better suited to low-cost archives such as in the cloud. Beyond backup and storage costs, organizations must also secure all of this data. A one-size-fits-all strategy means that all data is secured to the level of the most sensitive, critically important data.


Technology and the future of modern warfare

This digital revolution points to a new kind of hyper-modern warfare. Artificial intelligence is a good example of this. If AI can read more data in a minute than a human can read in a year, then its value to militaries is immeasurable. In a recent interview with The Daily Telegraph, the current Chief of General Staff, General Sir Mark Carleton-Smith, has acknowledged that “we are already seeing the implications of artificial intelligence, quantum computing and robotics, and how they might be applied on the battlefield”. Machine learning, for instance, has already been used to harvest key grains of intelligence from the chaff of trivial information that usually inundates analysts. All this is not to say, however, that there will be a complete obsolescence of traditional equipment and means. The British Army remains an industrial age organisation with an industrial skill set, but one which is confronted by innovation challenges. Conventional threats can still materialise at any time. The recent stationing of Russian troops along the Ukrainian border and within the Crimea – in addition to the manoeuvring of its naval forces in the Sea of Azov – is a case in point.


Attacking Natural Language Processing Systems With Adversarial Examples

The attack can potentially be used to cripple machine learning translation systems by forcing them to either produce nonsense, or actually change the nature of the translation; to bottleneck training of NLP models; to misclassify toxic content; to poison search engine results by causing faulty indexing; to cause search engines to fail to identify malicious or negative content that is perfectly readable to a person; and even to cause Denial-of-Service (DoS) attacks on NLP frameworks. Though the authors have disclosed the paper’s proposed vulnerabilities to various unnamed parties whose products feature in the research, they consider that the NLP industry has been laggard in protecting itself against adversarial attacks. The paper states: ‘These attacks exploit language coding features, such as invisible characters and homoglyphs. Although they have been seen occasionally in the past in spam and phishing scams, the designers of the many NLP systems that are now being deployed at scale appear to have ignored them completely.’
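
Here is a small sketch of the two perturbation classes the researchers describe: an invisible zero-width character and a Cyrillic homoglyph. Both variants render much like the original to a human reader yet are different byte sequences to an NLP pipeline; the final lines show the obvious first countermeasure of normalizing and stripping format characters before text reaches a model. This is my own illustration, not the paper's code.

```python
import unicodedata

original = "payment approved"

invisible = "pay\u200bment approved"         # U+200B ZERO WIDTH SPACE inserted
homoglyph = original.replace("a", "\u0430")  # U+0430 CYRILLIC SMALL LETTER A

for variant in (invisible, homoglyph):
    # Visually near-identical, yet no longer equal to the original string.
    print(repr(variant), "| equals original:", variant == original)

# Countermeasure: normalize and drop non-printing "format" (Cf) characters
# before the text reaches tokenization or indexing.
cleaned = "".join(c for c in unicodedata.normalize("NFKC", invisible)
                  if unicodedata.category(c) != "Cf")
print(cleaned == original)  # True: the invisible character is gone

# Note: homoglyphs need a confusables mapping (e.g. Unicode's confusables list);
# normalization alone does not fold Cyrillic letters back to Latin ones.
```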


When done right, network segmentation brings rewards

Segmentation is an IT approach that separates critical areas of the network to control east-west traffic, prevent lateral movement, and ultimately reduce the attack surface. Traditionally, this is done via an architectural approach – relying on hardware, firewalls and manual work. This can often prove cumbersome and labor intensive, which is a contributing factor in 82% of respondents saying that network segmentation is a “huge task.” ... Modern segmentation uses a software-based approach that is simpler to use, faster to implement and is able to secure more critical assets. The research shows that organizations that leverage the latest approach to segmentation will realize essential security benefits, like identifying more ransomware attacks and reducing time to mitigate attacks. “The findings of the report demonstrate just how valuable a strong segmentation strategy can be for organizations looking to reduce their attack surface and stop damaging attacks like ransomware,” said Pavel Gurvich, SVP, Akamai Enterprise Security.


Neural networks can hide malware, and scientists are worried

As malware scanners can’t detect malicious payloads embedded in deep learning models, the only countermeasure against EvilModel is to destroy the malware. The payload only maintains its integrity if its bytes remain intact. Therefore, if the recipient of an EvilModel retrains the neural network without freezing the infected layer, its parameter values will change and the malware data will be destroyed. Even a single epoch of training is probably enough to destroy any malware embedded in the DL model. However, most developers use pretrained models as they are, unless they want to fine-tune them for another application. And some forms of finetuning freeze most existing layers in the network, which might include the infected layers. This means that alongside adversarial attacks, data-poisoning, membership inference, and other known security issues, malware-infected neural networks are a real threat to the future of deep learning.
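
To see why that countermeasure works, here is a toy NumPy sketch (not EvilModel's code, and with a harmless string standing in for a payload): bytes hidden in float parameters only decode while those exact bit patterns survive, and even a mild weight update wipes them out.

```python
import numpy as np

payload = b"demo-payload"  # benign stand-in for an embedded payload
# Reinterpret the padded bytes as float32 "weights" of a layer.
params = np.frombuffer(payload.ljust(16, b"\0"), dtype=np.float32).copy()

def extract(params):
    # Dump the raw parameter bytes and strip the zero padding.
    return params.tobytes().rstrip(b"\0")

print(extract(params) == payload)   # True: the bytes ride along inside the weights

# "Retraining" the layer: even a mild weight-decay style update changes the bits.
params = params * np.float32(0.999)
print(extract(params) == payload)   # False: the hidden bytes no longer decode
```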


5 Key Skills Needed To Become a Great Data Scientist

Data Scientists should develop the habit of critical thinking. It helps in better understanding the problem. Unless the problem is understood at the most granular level, the solution can’t be good. Critical thinking helps in analyzing the different options and choosing the right one. When solving data science problems, decisions are rarely simply good or bad. A lot of options lie in the grey area between good and bad. There are many decisions involved in a data science project, like choosing the right set of attributes, the right methodology, the right algorithms, the right metrics to measure the model performance, and so on. ... Coding skills are as important to a data scientist as eyes are to an artist. Anything that a data scientist does requires coding skills: reading data from multiple sources, performing exploratory analysis on the data, building models, and evaluating them. ... Math is another important skill for data scientists. It is OK not to know some of the math concepts while learning data science, but it will not be possible to excel as a data scientist without understanding them.


DeepMind Now Wants To Study The Behaviour Of Electrons, Launches An AI Tool

Density functional theory (DFT) describes matter at the quantum level, but popular approximations suffer from systematic errors that have arisen from the violation of mathematical properties of the exact functional. DeepMind has overcome this fundamental limitation by training a neural network on molecular data and on fictitious systems with fractional charge and spin. The result was the DM21 (DeepMind 21) tool. It correctly describes typical examples of artificial charge delocalization and strong correlation and performs better than traditional functionals on thorough benchmarks for main-group atoms and molecules. The company claims that DM21 accurately models complex systems such as hydrogen chains, charged DNA base pairs, and diradical transition states. The tool DM21 is a neural network to achieve the state of the art accuracy on large parts of chemistry and to accelerate scientific progress; the code has been open-sourced.


Are venture capitalists misunderstood?

The arrival of growth investors put traditional VCs under further pressure. Whereas Rock and his successors regularly got involved long before there was a product to market, others eventually realized there were opportunities further down the line, and provided vast amounts of capital to established firms that they believed had the potential to become many times larger. These investors, who included Yuri Milner and Masayoshi Son, were irresistible to ambitious tech companies. Unlike VCs, which demanded equity in exchange for funding, Milner and Son did not even want to sit on the board. Mallaby argues that huge capital injections by growth investors (and the VCs that chose to compete with them) resulted in greater control for entrepreneurs, but also weaker corporate governance and ultimately over-reach and ill-discipline. “Precisely at the point when tech companies achieved escape velocity and founders were apt to feel too sure of themselves,” Mallaby writes, “the usual forms of private or public governance would thus be suspended.”


IT security: 4 issues to watch in 2022

If infosec had a greatest hits album, basic security hygiene would be track one. Year in, year out, the root cause of many security incidents can be traced back to the fundamentals. A wide range of threats, from ransomware to cloud account hijacking to data leakage, owe much of their efficacy to surprisingly simple missteps, from a misconfigured setting (or even a default setting left unchanged) to an over-privileged user to unpatched software. ... This begs the question: What are the basics? Things like password hygiene and system patching apply across the board, but you also need to identify and agree with colleagues on “the basics” required in your specific organization. That gives you a collective standard to work toward and measure against. Moreover, the word “basic” doesn’t do some of the fundamentals justice. “In my world, the basics are patch management, secure configuration, threat modeling, DAST and SAST scanning, internal and external vulnerability scanning, penetration testing, defense against phishing attacks, third-party vulnerability assessments, backup and disaster recovery, and bespoke security training,” Elston says.


Growing an Experiment-Driven Quality Culture in Software Development

When people share their thoughts and wishes, there might be different needs underlying them. For example, tech leadership can express a desire for global quantitative metrics, while what they might actually need is to figure out what impact they want to have in the first place and what information they need to have that impact. Remember the teams falling back to everyday business? Systemic factors play a huge role and need to be considered in your experiments. For example, if you’re setting out to improve the quality culture of a team, think about what kind of behavior contributing to quality gets rewarded, and how. If a person does a great job, yet these contributions and the resulting impact are not valued, they probably won’t get promoted for it. The main challenges usually come back to people’s interactions as well as the systems in which we are interacting. This includes building trustful relationships and shaping a safe, welcoming space where people can bring their whole authentic selves and have a chance to thrive.



Quote for the day:
 
"The strong do what they have to do and the weak accept what they have to accept." -- Thucydides

Daily Tech Digest - December 12, 2021

AWS Among 12 Cloud Services Affected by Flaws in Eltima SDK

USB Over Ethernet enables sharing of multiple USB devices over Ethernet, so that users can connect to devices such as webcams on remote machines anywhere in the world as if the devices were physically plugged into their own computers. The flaws are in the USB Over Ethernet function of the Eltima SDK, not in the cloud services themselves, but because of code-sharing between the server side and the end-user apps, they affect both clients – such as laptops and desktops running Amazon WorkSpaces software – and cloud-based machine instances that rely on services such as Amazon Nimble Studio AMI, which run in the Amazon cloud. The flaws allow attackers to escalate privileges so that they can launch a slew of malicious actions, including knocking out the very security products that users depend on for protection. Specifically, the vulnerabilities can be used to “disable security products, overwrite system components, corrupt the operating system or perform malicious operations unimpeded,” SentinelOne senior security researcher Kasif Dekel said in a report published on Tuesday.


Rust in the Linux Kernel: ‘Good Enough’

When we first looked at the idea of Rust in the Linux kernel, it was noted that the objective was not to rewrite the kernel’s 25 million lines of code in Rust, but rather to augment new development with a language that is more memory-safe than the standard C normally used in Linux development. Part of the issue with using Rust is that Rust is compiled with LLVM, as opposed to GCC, and consequently supports fewer architectures. This is a problem we saw play out when the Python cryptography library replaced some old C code with Rust, leading to a situation where certain architectures would not be supported. Hence, using Rust only for drivers would limit the impact of this particular constraint. Ojeda further noted that the Rust for Linux project has been invited to a number of conferences and events this past year, and has even garnered some support from Red Hat, which joins Arm, Google, and Microsoft in supporting the effort. According to Ojeda, Red Hat says that “there is interest in using Rust for kernel work that Red Hat is considering.”


DeepMind tests the limits of large AI language systems with 280-billion-parameter model

DeepMind, which regularly feeds its work into Google products, has probed the capabilities of such LLMs by building a language model with 280 billion parameters named Gopher. Parameters are a quick measure of a language model’s size and complexity, meaning that Gopher is larger than OpenAI’s GPT-3 (175 billion parameters) but not as big as some more experimental systems, like Microsoft and Nvidia’s Megatron model (530 billion parameters). It’s generally true in the AI world that bigger is better, with larger models usually offering higher performance. DeepMind’s research confirms this trend and suggests that scaling up LLMs does offer improved performance on the most common benchmarks testing things like sentiment analysis and summarization. However, the researchers also cautioned that some issues inherent to language models will need more than just data and compute to fix. “I think right now it really looks like the model can fail in a variety of ways,” said Rae.
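To make the "parameters as a measure of size" point concrete, a dense transformer's parameter count can be roughly estimated from its depth and hidden width. The sketch below uses the common ~12·L·d² rule of thumb; the configurations are hypothetical and the results are order-of-magnitude estimates, not published specifications.

```python
# A rough rule of thumb for dense transformers: roughly
# 12 * num_layers * hidden_dim**2 parameters in the transformer blocks
# (attention + feed-forward), ignoring embeddings and other details.
def approx_params(num_layers: int, hidden_dim: int) -> int:
    return 12 * num_layers * hidden_dim ** 2

# Hypothetical configurations, chosen only to show how depth and width
# drive the count into the hundreds of billions; not published specs.
for name, layers, width in [
    ("~175B-class model", 96, 12288),
    ("~280B-class model", 80, 16384),
]:
    print(f"{name}: ~{approx_params(layers, width) / 1e9:.0f}B parameters")
```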


2022 transformations promise better builders, automation, robotics

The Great Resignation is real, and it has affected the logistics industry more than anyone realizes. People don’t want low-paying and difficult jobs when there’s a global marketplace where they can find better work. Automation will be seen as a way to address this, and in 2022, we will see a lot of tech VC investment in automation and robotics. Some say SpaceX and Virgin can deliver cargo via orbit, but I think that’s ridiculous. What we need (and what I think will be funded in 2022) are more electric and autonomous vehicles like eVTOL, a company that is innovating in the “air mobility” market. According to eVTOL’s website, the U.S. Department of Defense has awarded $6 million to the City of Springfield, Ohio, for a National Advanced Air Mobility Center of Excellence. ... In 2022 transformations, grocery will cease to be an in-store-only retail experience, and the sector will be as virtual and digitally driven as the best of them. Things get interesting when we combine locker pickup, virtual grocery, and automated last-mile delivery using autonomous vehicles that can deliver within a mile of the warehouse or store.


Penetration testing explained: How ethical hackers simulate attacks

In a broad sense, a penetration test works in exactly the same way that a real attempt to breach an organization's systems would. The pen testers begin by examining and fingerprinting the hosts, ports, and network services associated with the target organization. They will then research potential vulnerabilities in this attack surface, and that research might suggest further, more detailed probes into the target system. Eventually, they'll attempt to breach their target's perimeter and get access to protected data or gain control of their systems. The details, of course, can vary a lot; there are different types of penetration tests, and we'll discuss the variations in the next section. But it's important to note first that the exact type of test conducted and the scope of the simulated attack need to be agreed upon in advance between the testers and the target organization. A penetration test that successfully breaches an organization's important systems or data can cause a great deal of resentment or embarrassment among that organization's IT or security leadership.
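The fingerprinting step described above can be illustrated with a toy TCP connect scan; this is a minimal sketch for hosts you are explicitly authorized to test within an agreed scope, and the port list is an arbitrary assumption. Real engagements rely on dedicated tooling rather than hand-rolled scripts.

```python
# A toy sketch of the fingerprinting step: try TCP connections to a few
# common ports on a target you are explicitly authorized to test.
import socket

COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

def scan(host: str, timeout: float = 1.0) -> dict[int, bool]:
    results = {}
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the port accepts the connection.
            results[port] = s.connect_ex((host, port)) == 0
    return results

if __name__ == "__main__":
    for port, is_open in scan("127.0.0.1").items():
        state = "open" if is_open else "closed/filtered"
        print(f"{port}/tcp ({COMMON_PORTS[port]}): {state}")
```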


EV charging in underground carparks is hard. Blockchain to the rescue

According to Bharadwaj, the concrete and steel environment effectively acted as a “Faraday cage,” which meant that the EV chargers wouldn’t talk to people’s mobile phones when they tried to initiate charging. You could find yourself stranded, unable to charge your car. “So we had to innovate.” ... As with any EV charging, a payment app connects your car to the EV charger. With Xeal, the use of NFC means the only time you need the Internet is to download the app in the first instance and create a profile that includes your personal and vehicle information and payment details. You then receive a cryptographic token on your mobile phone that authenticates your identity and enables you to access all of Xeal’s public charging stations. The token is time-bound, which means it dissolves after use. To charge your car, you hold your phone up to the charger. Your mobile reads the cryptographic token, automatically bringing up an NFC scanner. It opens the app, authenticates your charging session, starts scanning, and within milliseconds, the charging session starts.
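Xeal has not published its token format, but the general idea of a time-bound, offline-verifiable credential can be sketched with a signed expiry timestamp. The HMAC construction, field names, and five-minute lifetime below are illustrative assumptions, not Xeal's actual scheme.

```python
# A generic sketch of a time-bound token: a payload with an expiry time,
# signed with a shared secret, verifiable without a network connection.
# This illustrates the concept only; it is not Xeal's actual scheme.
import base64, hashlib, hmac, json, time

SECRET = b"provisioned-during-app-setup"  # placeholder shared secret

def issue_token(user_id: str, lifetime_s: int = 300) -> str:
    payload = json.dumps({"sub": user_id, "exp": int(time.time()) + lifetime_s})
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(f"{payload}.{sig}".encode()).decode()

def verify_token(token: str) -> bool:
    payload, sig = base64.urlsafe_b64decode(token).decode().rsplit(".", 1)
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: token was forged or altered
    return json.loads(payload)["exp"] > time.time()  # reject expired tokens

tok = issue_token("driver-42")
print(verify_token(tok))  # True until the token expires
```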


Top 8 AI and ML Trends to Watch in 2022

The scarcity of skilled AI developers and engineers stands as a major barrier to adopting AI technology in many companies. No-code and low-code technologies come to the rescue. These solutions aim to offer simple interfaces that can, in theory, be used to develop highly complex AI systems. Today, web design and no-code user interface (UI) tools let users create web pages simply by dragging and dropping graphical elements together. Similarly, no-code AI technology allows developers to create intelligent AI systems by simply merging different ready-made modules and feeding them industrial domain-specific data. Furthermore, NLP, low-code, and no-code technologies will soon enable us to instruct complex machines with our voice or written instructions. These advancements will result in the “democratization” of AI, ML, and data technologies. ... In 2022, with the aid of AI and ML technologies, more businesses will automate multiple yet repetitive processes that involve large volumes of information and data. In the coming years, an increased rate of automation will be seen in various industries through robotic process automation (RPA) and intelligent business process management software (iBPMS).


The limitations of scaling up AI language models

Large language models like OpenAI’s GPT-3 show an aptitude for generating humanlike text and code, automatically writing emails and articles, composing poetry, and fixing bugs in software. But the dominant approach to developing these models involves leveraging massive computational resources, which has consequences. Beyond the fact that training and deploying large language models can incur high technical costs, the requirements put the models beyond the reach of many organizations and institutions. Scaling also doesn’t resolve the major problem of model bias and toxicity, which often creeps in from the data used to train the models. In a panel during the Conference on Neural Information Processing Systems (NeurIPS) 2021, experts from the field discussed how the research community should adapt as progress in language models continues to be driven by scaled-up algorithms. The panelists explored how to ensure that smaller institutions can meaningfully research and audit large-scale systems, as well as ways to help ensure that these systems behave as intended.


Here are three ways distributed ledger technology can transform markets

While firms have narrowed their scope to address more targeted pain points, the increased digitalisation of assets is helping to drive interest in the adoption of DLT in new ways. Previous talk of mass disruption of the financial system has given way to more realistic, but still transformative, discussions around how DLT could open doors to a new era of business workflows, enabling transactional exchanges of assets and payments to be recorded, linked, and traced throughout their entire lifecycle. DLT’s true potential rests with its ability to eliminate traditional “data silos”, so that parties no longer need to build separate recording systems, each holding a copy of their version of “the truth”. This inefficiency leads to time delays, increased costs and data quality issues. In addition, the technology can enhance security and resilience, and would give regulators real-time access to ledger transactions to monitor and mitigate risk more effectively. In recent years, we have been pursuing a number of DLT-based opportunities, helping us understand where we believe the technology can deliver maximum value while retaining the highest levels of risk management.
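The "single version of the truth" idea can be illustrated with a toy append-only ledger in which each record is hash-linked to its predecessor, so every party replaying the chain arrives at the same history. This is a conceptual sketch only, not a model of any particular DLT platform.

```python
# A toy append-only ledger: each record carries the hash of the previous
# record, so any party replaying the chain can detect tampering and all
# parties converge on the same shared history. Conceptual sketch only.
import hashlib, json, time

def record_hash(record: dict) -> str:
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class Ledger:
    def __init__(self):
        self.chain = []

    def append(self, payload: dict) -> dict:
        prev = record_hash(self.chain[-1]) if self.chain else "genesis"
        record = {"ts": time.time(), "prev": prev, "payload": payload}
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        # Recompute every link; a single edited record breaks the chain.
        for i in range(1, len(self.chain)):
            if self.chain[i]["prev"] != record_hash(self.chain[i - 1]):
                return False
        return True

ledger = Ledger()
ledger.append({"asset": "BOND-123", "from": "A", "to": "B"})
ledger.append({"payment": 1_000_000, "from": "B", "to": "A"})
print(ledger.verify())  # True; tampering with an earlier record breaks the links
```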


To identity and beyond—One architect's viewpoint

Simple is often better: You can do (almost) anything with technology, but it doesn't mean you should. Especially in the security space, many customers overengineer solutions. I like this video from Google’s Stripe conference to underscore this point. People, process, technology: Design for people to enhance process, not tech first. There are no "perfect" solutions. We need to balance various risk factors, and decisions will be different for each business. Too many customers design an approach that their users later avoid. Focus on 'why' first and 'how' later: Be the annoying 7-year-old kid with a million questions. We can't arrive at the right answer if we don't know the right questions to ask. Lots of customers make assumptions on how things need to work instead of defining the business problem. There are always multiple paths that can be taken. Long tail of past best practices: Recognize that best practices are changing at light speed. 



Quote for the day:

"Eventually relationships determine the size and the length of leadership." -- John C. Maxwell