Internet of Things

Claudio Savaglio, Giancarlo Fortino, MengChu Zhou, Jianhua Ma (Editors)

Device-Edge-Cloud Continuum
Paradigms, Architectures and Applications
Internet of Things
Technology, Communications and Computing
Series Editors
Giancarlo Fortino, Rende (CS), Italy
Antonio Liotta, Edinburgh Napier University, School of Computing, Edinburgh, UK
The series Internet of Things - Technologies, Communications and Computing
publishes new developments and advances in the various areas of the different facets
of the Internet of Things. The intent is to cover technology (smart devices, wireless
sensors, systems), communications (networks and protocols) and computing (theory, middleware and applications) of the Internet of Things, as embedded in the
fields of engineering, computer science, life sciences, as well as the methodologies
behind them. The series contains monographs, lecture notes and edited volumes
in the Internet of Things research and development area, spanning the areas of
wireless sensor networks, autonomic networking, network protocol, agent-based
computing, artificial intelligence, self organizing systems, multi-sensor data fusion,
smart objects, and hybrid intelligent systems.
Indexing: Internet of Things is covered by Scopus and Ei-Compendex
Claudio Savaglio • Giancarlo Fortino •
MengChu Zhou • Jianhua Ma
Editors
Device-Edge-Cloud
Continuum
Paradigms, Architectures and Applications
Editors
Claudio Savaglio
DIMES
Università della Calabria
Rende, Cosenza, Italy
MengChu Zhou
New Jersey Institute of Technology
Newark, NJ, USA
Giancarlo Fortino
DIMES
Università della Calabria
Rende, Cosenza, Italy
Jianhua Ma
Hosei University
Tokyo, Japan
ISSN 2199-1073 ISSN 2199-1081 (electronic)
Internet of Things
ISBN 978-3-031-42193-8 ISBN 978-3-031-42194-5 (eBook)
https://doi.org/10.1007/978-3-031-42194-5
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland
AG 2024
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Paper in this product is recyclable.
Contents
Towards the Edge-Cloud Continuum Through the Serverless Workflows 1
Christian Sicari, Alessio Catalfamo, Lorenzo Carnevale,
Antonino Galletta, Antonio Celesti, Maria Fazio, and Massimo Villari
Firmware Dynamic Analysis Through Rewriting ............................ 19
Claudia Greco, Michele Ianni, Antonella Guzzo, and Giancarlo Fortino
Performance Analysis of a Blockchain for a Traceability System
Based on the IoT Sensor Units Along the Agri-Food Supply Chain........ 35
Maria Teresa Gaudio, Sudip Chakraborty, and Stefano Curcio
The Role of Federated Learning in Processing Cancer Patients’ Data..... 49
Mihailo Ilić, Mirjana Ivanović, Dušan Jakovetić, Vladimir Kurbalija,
Marko Otlokan, Miloš Savić, and Nataša Vujnović-Sedlar
Scheduling Offloading Decisions for Heterogeneous Drones on
Shared Edge Resources.......................................................... 69
Giorgos Polychronis and Spyros Lalis
Multi-objective Optimization Approach to High-Performance
Cloudlet Deployment and Task Offloading in Mobile-edge
Computing ....................................................................... 89
Xiaojian Zhu and MengChu Zhou
Towards Secure TinyML on a Standardized AI Architecture .............. 121
Muhammad Yasir Shabir, Gianluca Torta, Andrea Basso,
and Ferruccio Damiani
Deep Learning Meets Smart Agriculture: Using LSTM Networks
to Handle Anomalous and Missing Sensor Data in the Compute
Continuum ........................................................................ 141
Riccardo Cantini, Fabrizio Marozzo, and Alessio Orsino
Evaluating the Performance of a Multimodal Speaker Tracking
System at the Edge-to-Cloud Continuum ..................................... 155
Alessio Orsino, Riccardo Cantini, and Fabrizio Marozzo
A Deep Reinforcement Learning Strategy for Intelligent
Transportation Systems ......................................................... 167
Francesco Giannini, Giuseppe Franzè, Giancarlo Fortino,
and Francesco Pupo
Compressed Sensing-Based IoMT Applications .............................. 183
Bharat Lal, Qimeng Li, Raffaele Gravina, and Pasquale Corsonello
Occupancy Prediction in Buildings: State of the Art and Future
Directions ......................................................................... 203
Irfanullah Khan, Emilio Greco, Antonio Guerrieri,
and Giandomenico Spezzano
Index............................................................................... 231
Towards the Edge-Cloud Continuum Through the Serverless Workflows
Christian Sicari, Alessio Catalfamo, Lorenzo Carnevale, Antonino Galletta,
Antonio Celesti, Maria Fazio, and Massimo Villari
1 Introduction
In recent years, we have witnessed the rise of edge computing, a new trend that stands in contrast to cloud computing and aims to collect and process data as close as possible to the data source. Even if edge computing has rapidly gained popularity, the cloud has kept the leadership both for heavyweight jobs and for data persistence, because of the difficulty of migration and integration. The gap between edge and cloud has recently been filled with an intermediate layer named fog, which is in charge of redirecting information to cloud and edge. The composition and orchestration of services across the three tiers have given rise to the cloud-edge continuum (or just continuum) [26] paradigm. The continuum's main goal consists of taking advantage of cloud, edge, and eventually even fog, to run applications where they best fit, and of re-adapting this placement if something changes in the environment or in the QoS parameters the applications are trying to satisfy [24, 30, 40].
Deploying software at the continuum is considered challenging for many reasons, such as architecture dependency, host federation, and global resource balancing [28, 36]. However, the serverless paradigm has recently been introduced with the intent of making these problems surmountable. Serverless (i.e., the function-as-a-service model) is a platform-independent approach for deploying and exposing services and their APIs to final users, without worrying about the underlying system. Function-as-a-service (FaaS) engines are typically based on an orchestrator (e.g., Kubernetes) that is able to manage applications composed of many containers, load-balance them, and federate resources [24]. Serverless and FaaS paradigms are
C. Sicari () · A. Catalfamo · L. Carnevale · A. Galletta · A. Celesti · M. Fazio · M. Villari
Department of Mathematics and Computer Sciences, Physical Sciences and Earth Sciences,
University of Messina, Messina, Italy
e-mail: csicari@unime.it; alecatalfamo@unime.it; lcarnevale@unime.it; angalletta@unime.it;
acelesti@unime.it; mfazio@unime.it; mvillari@unime.it
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
C. Savaglio et al. (eds.), Device-Edge-Cloud Continuum, Internet of Things,
https://doi.org/10.1007/978-3-031-42194-5_1
widely used in cloud-only applications, but thanks to their flexibility, some recent
works are emerging with the purpose of deploying functions into the edge of the
network for lightweight problems [3, 7, 33, 45]. For example, FaaS can be used
for isolated and loosely coupled tasks, but it is not ideal for complex, tightly coupled applications due to the impossibility of easily composing and integrating functions [26]. These drawbacks generate issues for continuum environments where, typically,
applications are coupled in data-driven workflows with many tasks connected
among different computing tiers [2, 20, 36].
In this chapter, we propose (i) new research guidelines for serverless orchestration in the continuum paradigm and (ii) a reference blueprint for the standard creation of a FaaS-based workflow orchestration. Specifically, we determine principles, definitions, a reference architectural model, and data structures that are useful
for defining and orchestrating serverless workflows. Once the baseline is defined,
we present a project called OpenWolf [42] as a ready-to-use solution for designing,
deploying, and using serverless workflows, composed of many functions spread
among the continuum. In order to evaluate the platform, we analyzed a deep learning
application for image classification in a smart city scenario, considering five steps:
collection, transformation, training, inference, and plotting.
The rest of the chapter is organized as follows. Section 2 describes the state of the art on serverless and workflows, highlighting the weaknesses and strengths of existing solutions. In Sect. 3 we describe the building blocks and the glossary terms for any serverless-based workflow engine. In Sect. 4 we design a cloud-edge architecture used to manage and run serverless workflows. In Sect. 5 we describe OpenWolf, an open-source project compliant with the reference architecture. In Sect. 6 we describe a typical machine learning workflow using the glossary and the building blocks of this work; moreover, this workflow is tested using OpenWolf and its performance is reported. Finally, in Sect. 7 we summarize the work presented and highlight the next research directions.
2 Background
The continuum aims to make a collaboration between the cloud and edge tiers
in order to distribute near real-time processing on edge and massive processing
on cloud [4]. The continuum faces several challenges related to different topics (e.g., security [41], scheduling), such that existing solutions [12] need to be re-engineered to become suitable for the computing continuum.
Recently, serverless computing has emerged as a solution for distributing small functions using containers, with the intent of reacting to external triggers (e.g., cron jobs, HTTP calls, message queue systems) [16, 46]. This new paradigm was well received by the scientific community, which has tried to exploit it for orchestrating functions over the continuum [29, 35] by using different orchestrators, such as Kubernetes [6], Nomad [8, 27], and more [13]. Moreover, FaaS is used in the
continuum to make the development, deployment, and automatic balancing easier, thanks to the underlying orchestrators [5, 31, 43].
The combined use of continuum and serverless has highlighted the problem of function composition, i.e., the capacity to concatenate functions to create more complex applications. The authors of [1] proposed three principles of serverless, (i) black-box functions, (ii) substitution, and (iii) double billing, which attempt to explain why composing FaaS applications could be considered an anti-pattern. However, we do not agree with that statement.
The term workflow is used generically to describe a well-defined organization of tasks connected in order to transform one or more inputs into a given output. In the scientific literature, this term mutated into scientific workflows, described in [22] as a way to deal with data and pipelined computation steps in different application fields (e.g., bioinformatics, cheminformatics, ecoinformatics, geoinformatics, physics), without mastering a computer science background. For example, Kepler is a grid-based workflow system, later extended [34] to support distributed computing on the grid. Almost in parallel, the Pegasus system [9] was proposed to abstract the workflow as an ensemble of independent tasks. Such technology continued to evolve progressively, keeping track of newer platforms, such as the grid [9, 21], the cloud [10], and containers [19, 44]. In the last five years, workflows have gained new popularity because of the increasing use of cloud computing and serverless. Indeed, the latter has been widely adopted for
designing and implementing workflows [17]. Perez et al. [32] designed a framework
for executing Linux-based containers in a FaaS platform (i.e., AWS Lambda).
Jiang et al. [17] integrated the scientific workflow into the main FaaS providers
in order to exploit the serverless paradigm and make the implementation for end
users (i.e., scientists) easier. Skyport [14] instead proposed creating black-box-based workflows by means of an engine able to compose workflows from lightweight virtualized software (i.e., Docker containers). Recently, workflows have become more sophisticated and accurate: a workflow is no longer just a programming pattern or a software architecture design, but an on-premise computational engine in which to define, store, and deploy a composition of black-box functions [25]. Hyperstream
[11] is a domain-specific tool used to deploy machine learning (ML) algorithms
that are automatically fired by incoming streaming data. A step forward in this direction was made in [18], where the authors proposed a workflow engine server (WES), a back-end engine used to store functions and workflows and run them when triggered by an event. Such an engine introduces workflow modularity and a validation schema, but it lacks integration with external systems and expandability with other functions. One of the most autonomous engines has
instead been presented by Lopez et al. [23] with Triggerflow, a trigger-based orchestration of serverless workflows. It lacks a user-friendly workflow editor, a data schema for the functions, and a global event registry. However, Triggerflow has clear strengths, such as a mechanism for firing trigger-based workflows, an asynchronous communication channel, and a serverless model. A different approach to workflows was instead presented in [37], where the authors propose R-Pulsar, a cloud-edge engine that is able to trigger functions according to an interesting
matching algorithm based on a decoupled associative message (AR) selection
already presented in [38]. This helps in matching producers and consumers, as well
as taking actions, such as running a function and starting a data production [39].
The abovementioned approaches demonstrate good flexibility, especially when related to ML [15], but the potential of serverless is still not fully exploited.
3 Workflow Engine Characteristics and Principles
In this section, we lay the foundations of the proposed workflow engine architecture, and we define the dictionary of terms used in the remainder of this chapter, i.e., (i) state, (ii) event, (iii) workflow, and (iv) manifest.
3.1 State
The main component of the architecture is the state. It mainly encapsulates a
function and all the information related to it within a job. It is stateless, which means
that the running job is not aware of other jobs interacting with it, and therefore
the job behavior cannot change based on previous executions. As shown in Fig. 1,
the job is composed of (i) metadata and (ii) a function. The latter is the code that
includes the job’s business logic, and it is encapsulated inside a container.
The metadata includes four different pieces of information, such as state
description, handler instructions, input schema, and output schema. Specifically,
they are described as follows:
Job description contains the job identifier, name, service description,
and service class. They are used to quickly classify the
service.
Fig. 1 The state encapsulates a function and all the information related to it within a job
Bootstrap instructions are run for instantiating a job inside the workflow engine.
These could contain the code to either build an image, set
the environment variables, or run a docker container.
Handler instructions are run every time a job is triggered. Basically, these
validate the input schema, run a function using the passed
and parsed parameters, wait for the function result, and
finally parse results with a format compliant with the
output schema.
Input/output schemas contain the schema of the acceptable input and the
schema of the provided output. They are essential for
creating compatible job chains.
Often, workflows also contain connectors, a special kind of job that simply maps a job's output to the next job's input, according to their input/output schemas. A connector is created on-premise during the workflow design, and it does not require a predefined input and output schema, since these change according to the workflow in which the connector is located.
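The handler lifecycle described above (validate the input, run the function, validate and return the output) can be sketched in a few lines. This is a minimal sketch under stated assumptions: the set-based schemas stand in for a full JSON-Schema validation step, and the `make_handler`/`count_items` names are hypothetical, not part of the chapter's specification.

```python
def make_handler(func, input_schema, output_schema):
    """Wrap a function so every invocation validates its input and output.

    The schemas are illustrative: plain sets of required keys, standing
    in for full JSON-Schema validation.
    """
    def handler(payload):
        missing = input_schema - set(payload)
        if missing:
            raise ValueError(f"input missing fields: {sorted(missing)}")
        result = func(payload)  # run the encapsulated business logic
        missing_out = output_schema - set(result)
        if missing_out:
            raise ValueError(f"output missing fields: {sorted(missing_out)}")
        return result
    return handler

# Hypothetical job: count the items of an order
count_items = make_handler(
    lambda payload: {"count": len(payload["items"])},
    input_schema={"items"},
    output_schema={"count"},
)
```

An invocation with a valid payload returns the schema-compliant output, while a payload missing a required field is rejected before the business logic runs.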
3.2 Event
An event is the only entity that can be processed in a workflow; it is originally sent
from outside the workflow and then processed inside the workflow. All changes
applied to an event are separately stored in a data lake, while the last version of
the event is propagated through the workflows’ jobs. An event is composed of both
immutable and mutable data. The immutable data includes the following:
Event ID identifies the event uniquely, and it is managed directly by the
workflow engine.
Workflow ID is a reference to the workflow which is processing/has processed
the event.
The mutable data are generally updated by the workflow engine and by the jobs
that process the event. This includes the following:
Status is a value in the domain {Started, Processing, Error, Processed}.
Data is the last job’s output.
Timestamp represents the date and time in which the last transformation has been
completed.
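The split between immutable and mutable fields can be sketched as a small data class; this is an illustrative model following the field names above, not OpenWolf's actual data structure.

```python
import time
import uuid
from dataclasses import dataclass, field

ALLOWED_STATUSES = {"Started", "Processing", "Error", "Processed"}

@dataclass
class Event:
    # Immutable part: set once, never touched by jobs.
    workflow_id: str
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    # Mutable part: updated by the engine and by the jobs.
    status: str = "Started"
    data: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

    def update(self, status: str, data: dict) -> None:
        """Record the outcome of the last job that processed the event."""
        if status not in ALLOWED_STATUSES:
            raise ValueError(f"unknown status: {status}")
        self.status = status
        self.data = data              # the last job's output replaces data
        self.timestamp = time.time()  # time of the last transformation
```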
3.3 Workflow
Fig. 2 Workflow example

The workflow diagram shown in Fig. 2 represents how states interact with each other. The workflow starts when the first node is triggered by an external event (action 1 in Fig. 2) carrying a data payload. Any event is directly connected to a state (action 2) and therefore to a connector (action 3). Connectors act as conciliators, filtering events with a specific state.
The first event is unique, and it is mapped one-to-one to a single workflow
execution. This avoids overlaps with other events that follow the same workflow.
Naturally, when an event passes through the states, it modifies its data according to
the output of the previous state.
Within the workflow, any kind of link is allowed, such as many-to-many, many-to-one, and one-to-many. However, workflows must start and finish with only one job. When a many-to-one relationship (action 5 in Fig. 2) is defined, the triggering condition needs to be specified. In this regard, the condition may follow Boolean algebra, i.e., using AND to combine two or more events that must all be received before firing the next state, or using OR when receiving any one of them is enough to fire the next state.
The workflow diagram shown in Fig. 2 is an example of an e-commerce scenario, where customers are notified both by email and by short message service (SMS) as soon as a product they are interested in is available again. The workflow is triggered by a web notification announcing that a given product is back in stock. The workflow fetches the users interested in this item using states J0, J1, and J2, retrieving the users' email addresses and telephone numbers. Finally, state J3 is used to notify the users. In this scenario, three connectors are used: two make J0's output compatible with J1's and J2's inputs, while the last one maps J1's and J2's outputs to J3's input.
3.4 Workflow Manifest
In order to describe a workflow within a schema, we propose a manifest based on the YAML format. The manifest translates into a process what was designed, e.g., in Fig. 2.
Listing 1 Workflow Manifest Example

name: workflow-name
callbackUrl: uri-where-to-send-result
states:
  state-id:
    function:
      ref: ref-to-function-id
    config:
      key: value
    start: true
handlers:
  handler-id:
    endpoint: endpoint-to-function
    config:
      key: value
workflow:
  state-id:
    activation: Boolean Equation
    inputFilter: jq command
    outputFilter: jq command
As shown in Listing 1, the manifest has (i) a name, (ii) a callback URL where the result is sent, and three more sections: (iii) states, (iv) handlers, and (v) workflow.
States list and describe all the states of the workflow. For each state, we define a name, a handler, and a global key-value configuration for the handler.
Handlers describe all the handlers called within the states. This attribute determines how to call each handler and its basic configuration, which may be overwritten in the states section. The separation of the states and handlers sections allows the same handler to be used multiple times in different states.
Workflow describes how the states interact. For each state, we determine which
previous states have triggered it and how to transform inputs and
outputs. This part acts as a connector.
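A minimal validation pass over such a manifest can be sketched as follows, assuming the YAML file of Listing 1 has already been parsed into a dict (e.g., with a YAML library); the specific checks and the `validate_manifest` name are illustrative assumptions.

```python
def validate_manifest(manifest):
    """Check the structure of a workflow manifest and return the
    identifier of the starting state.

    `manifest` is assumed to be the YAML of Listing 1 parsed into a dict.
    """
    # The manifest must carry a name plus the three structural sections.
    for section in ("name", "states", "handlers", "workflow"):
        if section not in manifest:
            raise ValueError(f"manifest missing section: {section}")
    # Exactly one state must be marked as the entry point (start: true).
    starts = [sid for sid, spec in manifest["states"].items()
              if spec.get("start")]
    if len(starts) != 1:
        raise ValueError("exactly one state must set start: true")
    # The workflow section may only reference declared states.
    for sid in manifest["workflow"]:
        if sid not in manifest["states"]:
            raise ValueError(f"workflow references unknown state: {sid}")
    return starts[0]
```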
4 Architecture
The reference architecture for managing a serverless workflow is shown in Fig. 3.
It is a four-layered architecture composed of (i) infrastructure, (ii) federation, (iii)
serverless, and (iv) service layers. All layers are described as follows.
Fig. 3 Workflow engine architecture
The infrastructure layer contains the bare-metal nodes that are part of the con-
tinuum environment. Nodes may have different geographical locations, architecture
characteristics, and distributions. The federation layer creates a communication
interoperability among the nodes of the infrastructure layer. It is composed of
an overlay network used to connect nodes with a message-oriented middleware
(MOM), with the intent of exchanging data over the overlay itself. The serverless layer provides FaaS features to the layer above, i.e., the service layer. It uses a container orchestrator for deploying functions among the federation. It includes a function repository for storing the functions in the system, a compiler to build each function for all the available and compatible architectures, and a gateway used to trigger the functions. The service layer is, instead, the top layer of the
architecture. It adds composition capabilities to the serverless layer. The service layer is composed of an event history database (EHD), a workflow repository, and
a single agent. The EHD stores a permanent history of event transformation within
the engine. Indeed, an event changes its mutable content when it is the input of a job.
However, if a workflow is composed of n-jobs, the initial event will have n changes.
Thus, the EHD stores all the n changes, along with the initial content. Furthermore,
we had to consider a status history array field in the event data structure, as shown
in Fig. 4. This approach allows to (i) keep track of the event history, (ii) keep track
of the event transformation, (iii) log every change, and (iv) recover any workflow
state. The workflow repository stores the manifest files that contain the workflow
Fig. 4 Event data model
descriptions according to the structure defined in Sect. 3.4. The broker coordinates the service layer and, more generally, the overall infrastructure. It is basically in charge of receiving external events, intercepting the execution of functions inside triggered states, making sure that the proper workflow manifest from the workflow repository is used, and then updating the EHD to save the actual data coming from the events or from the states.
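The EHD's append-only behavior can be sketched as follows; the `EventHistoryDatabase` class is an illustrative in-memory stand-in for the actual store.

```python
class EventHistoryDatabase:
    """Append-only log of event transformations: a workflow of n jobs
    leaves n + 1 records per event (the initial content plus one per
    job), which makes replay and recovery possible."""

    def __init__(self):
        self._log = {}

    def record(self, event_id, snapshot):
        # Store a copy so later mutations do not rewrite history.
        self._log.setdefault(event_id, []).append(dict(snapshot))

    def history(self, event_id):
        return self._log.get(event_id, [])

    def latest(self, event_id):
        return self.history(event_id)[-1]
```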
5 OpenWolf: Serverless Workflow Engine
The architecture shown in Fig. 3 is implemented in an open-source project currently
under development, called OpenWolf [42]. The OpenWolf architecture is shown in
Fig. 5, and it is composed of four main elements: (i) Kubernetes, (ii) OpenFaaS, (iii)
Redis, and (iv) the OpenWolf agent.
Kubernetes works between the federation and serverless layers. It is used to federate continuum nodes using its own overlay network. Moreover, it also provides the orchestration tools needed by the serverless layer for deploying functions among the continuum. OpenFaaS works at the serverless layer as the engine
to store, compile, deploy, and manage functions in conjunction with Kubernetes.
Redis works inside the service layer, and it acts as both the EHD and the workflow repository. Indeed, it stores the workflow manifests, but it also keeps track of the workflow executions. For the latter, OpenWolf uses a well-defined event structure expressed in JSON format, whose main properties are called ctx and data. The former represents the event context, and it is composed of the workflowID, which references the workflow the event belongs to; the execID, which distinguishes the different executions of the same workflow; and the state, which references the state that has returned the event. The data property, instead, is the function's output
Fig. 5 Workflow engine architecture
itself and, unlike the ctx, which is read and set by the workflow agent, it is fully managed by the function.
An event example is shown in Listing 2; it is fired by State C in the workflow shown in Fig. 6.
Listing 2 Event Data Structure

{
  "ctx": {
    "workflowID": "inference-traffic",
    "execID": "inference-traffic.123",
    "state": "C"
  },
  "data": {
    "AIQ": 47,
    "Scale": "EU"
  }
}
Fig. 6 Example of workflow in data analytics
The OpenWolf agent acts as a broker for the workflow statuses, as it is used to achieve the function composition feature. OpenWolf ensures that any event follows the correct path in the workflow and triggers the correct states with a proper transformation of the incoming event. In this regard, the OpenWolf agent is deployed as a standalone stateless microservice inside the same Kubernetes cluster used to run the serverless functions. The agent exposes two interfaces. The first is a public interface used to trigger a workflow from the outside. The second is closed inside the Kubernetes cluster, and it is used as a callback URL for each asynchronous function triggered by any workflow. By doing so, the agent intercepts all the events belonging to a workflow, extracts the context information, and uses it to fetch the workflow and current execution information. It then triggers the next states in the manifest, forwarding the received event with an updated ctx property. This process is described more concisely in the activity diagram in Fig. 7.
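The agent's callback path can be sketched as follows. Two simplifications are assumptions made for illustration: activation is reduced to a plain list of upstream state names (OR semantics only, whereas OpenWolf manifests use Boolean equations), and the `trigger` callable stands in for the asynchronous function invocation.

```python
def next_states(workflow_section, fired_state):
    """States whose (simplified) activation lists name the fired state."""
    return [sid for sid, spec in workflow_section.items()
            if fired_state in spec.get("activation", [])]

def handle_callback(event, workflow_section, trigger):
    """Core of the agent's callback interface: read the ctx of an
    incoming event, compute the next states from the manifest, and
    trigger each one with an updated ctx."""
    fired = event["ctx"]["state"]
    for sid in next_states(workflow_section, fired):
        forwarded = {"ctx": {**event["ctx"], "state": sid},
                     "data": event["data"]}
        trigger(sid, forwarded)  # stands in for the async invocation
```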
6 Use Case
Smart cities are a typical computing continuum scenario. For example, in private and public spaces we can find Internet of Things (IoT) sensors and small computing devices, such as cameras and Raspberry Pis, for monitoring buildings, traffic, or environmental parameters. These data are then typically processed in local data centers provided by private citizens, municipalities, or research institutes, which often rely on cloud providers like AWS or Azure, e.g., for long-term storage or processing. As a consequence, it is easy to map the three continuum layers onto this scenario. In the following, we analyze a typical image processing pipeline. Smart cities rely on this kind of algorithm for detecting violent and dangerous situations, traffic rule violations, or roadside surveillance applications (Fig. 8).
Fig. 7 OpenWolf agent actions
Fig. 8 OpenWolf for image processing
The designed image processing workflow is composed of five states. Each state represents a function, i.e., a process inside the workflow. Each state is deployed within one of the computing continuum tiers according to the static scheduling rule defined in the workflow manifest. The states are described as follows:
Collect exploits a camera stream for collecting environment images.
Transform edits the images, cleaning and filtering noisy data. It can be run on
any of the continuum’s tiers.
Train trains a recurrent neural network (RNN) model used to analyze the
collected images.
Inference predicts the input image’s label using the latest model produced by
the train state.
Show pushes the result of the inference over a web page.
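One plausible way to encode these five states and their static tier placement, purely as an illustration (OpenWolf's real manifest schema may differ; the names and fields below are our assumptions):

```python
# Illustrative encoding of the image-processing workflow: five states,
# each statically pinned to a continuum tier. All field names are invented.
IMAGE_WORKFLOW = {
    "collect":   {"function": "camera-collect", "tier": "edge",  "next": ["transform"]},
    "transform": {"function": "img-clean",      "tier": "any",   "next": ["train", "inference"]},
    "train":     {"function": "rnn-train",      "tier": "cloud", "next": []},
    "inference": {"function": "rnn-infer",      "tier": "edge",  "next": ["show"]},
    "show":      {"function": "web-publish",    "tier": "edge",  "next": []},
}

def states_on_tier(manifest: dict, tier: str) -> list:
    """Return the states statically scheduled on the given tier."""
    return [name for name, state in manifest.items() if state["tier"] == tier]
```

With this layout, the continuum test bed of Sect. 6.1 corresponds to placing train on the cloud tier and the remaining states at the edge.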
The first problem we identify in the continuum, especially when FaaS is employed,
is the need for a good scheduler that deploys functions according to specific quality
of service (QoS) requirements, e.g., latency, network bandwidth usage, and resource
performance. The second problem, which depends on the first, is where to place the
data. Data are typically collected at the edge, but they may be only partially processed
there, or delivered to the cloud for massive analysis. QoS requirements depend
directly on the service being provided in the smart city: for example, road
traffic monitoring may require optimizing accuracy, whereas gunshot detection
may require real-time analysis. Our proposed solution aims to let users directly
customize what data are processed and where, in order to satisfy any kind
of QoS requirement.
Fig. 9 Workflow comparison (performance of the Train, Data Fetch, and Inference phases in the Cloud, Edge, and Continuum deployments)
6.1 Performance Evaluation
We evaluated the workflow in three different environments: a full-cloud, a full-edge,
and a continuum test bed, considering three key moments of the workflow: (i) training,
(ii) data fetching, and (iii) data inference. In the continuum case, cloud nodes were
in charge of model training, whereas edge nodes focused on collecting and inferring
data. These three functions were encapsulated in three different OpenFaaS functions.
The adopted dataset is CIFAR-10, the algorithm was trained for 50 epochs, and the
RNN was implemented in PyTorch. The training data size is 130 MB, while the test
data used during inference amount to around 100 MB. As an edge node, we used a
single Raspberry Pi 4 with an ARM64 operating system, 4 GB of RAM, and a
1.5 GHz quad-core processor. As a cloud node, we used a virtual machine with
16 GB of RAM, a 2.8 GHz quad-core processor, and an x64 operating system.
Results are shown in Fig. 9. The edge node needed 400% of the cloud training
time, and 120% of its inference time, but its time to access local data is close to
zero. On the other hand, the cloud requires 45 seconds to transfer the data from the
edge object storage to its local storage. Moreover, the edge device does not require
any network usage, whereas the cloud uses the WAN to receive the entire test
dataset from the edge object storage. Finally, distributing the computation over the
continuum exploits both the cloud's training speed and the edge's data locality,
avoiding any massive network usage. As a direct consequence, the inference is
performed at the edge, but, as shown in Fig. 9, the overall performance in the
continuum is better than in both the pure edge and the pure cloud deployments.
7 Conclusion and Future Works
In the era of serverless and microservice architectures, workflows are gradually
gaining popularity as a tool for combining serverless services and deploying them
to compose complex functions on modern cloud-based infrastructures.
Historically, workflows have been understood as computation chains whose
processes depend on the specific field in which they operate. In the last two
years, the term has instead started to appear in different fields, such as microservices,
FaaS, and the cloud-edge continuum. The scientific community broadly shares the
idea that workflows enable cooperation between functions, services, and, in general,
network hosts.
This trend is fully reasonable: we have learned to deploy functions and services
everywhere, as the new concept of the "Internet of Everything" suggests. However,
we have not yet managed to link these capabilities together. To fill this gap,
several open-source and commercial providers have proposed different "linking
services," known as FaaS orchestrators. These are valid products, but they do not
rely on a standard, are not interoperable, and none of them satisfies all the
requirements a function workflow may pose.
In this scenario, we started from scratch by defining the workflow concept.
We first identified the elements involved in workflows, i.e., jobs and events, and
how they relate to each other. We then defined a design schema for workflows with
clear terms, figures, and data models. Finally, using these tools, we proposed a
reference architecture for the management of a workflow platform over a continuum
cluster. After introducing this glossary and these architectural patterns, we validated
our work by presenting OpenWolf, a recently released open-source engine for
designing and running FaaS workflows on heterogeneous Kubernetes clusters, and
we measured its capabilities on a continuous learning workflow applied to a smart
city environment, used to keep a square or a street under security surveillance.
This work can be considered a starting point for the serverless workflow field,
but several challenges remain, such as (i) designing the security aspects of the
engine, (ii) designing fault tolerance at the node level, and (iii) implementing a
workflow engine that fully respects this reference architecture. All these challenges
will be faced in future work, with the main goal of providing a usable prototype
of a workflow engine platform.
Firmware Dynamic Analysis Through
Rewriting
Claudia Greco, Michele Ianni, Antonella Guzzo, and Giancarlo Fortino
1 Introduction
The spread of Internet of Things (IoT) devices and their full integration into
everyday life is one of the major factors defining the current technology landscape.
With embedded computational power and persistent connectivity to the Internet,
an ever-increasing number of everyday objects have become IoT devices, creating
new levels of automation, efficiency, and convenience. IoT devices are used in a
wide range of applications and ecosystems, including smart homes, healthcare,
transportation, industrial settings, and daily living [1, 2]. By gathering and
transmitting data, smart objects enable new possibilities for innovation and
improvement. Considering their constant use and the access they have to our data,
ensuring that these devices are safe is an urgent concern, exacerbated by the fact
that they still lack adequate security and safety measures, putting privacy at risk
and making IoT devices increasingly appealing targets for attackers [3]. The
vulnerabilities present in IoT devices make them highly susceptible to attacks, and
they are frequently viewed as low-hanging fruit by malicious actors, owing to their
ease of exploitation [4]. The necessity of conducting a thorough, security-focused
evaluation of IoT devices has been well established [5]. However, conventional
analysis methods are often not suitable for the IoT environment: the dynamic
analysis of the firmware of these devices typically requires that code is executed
not in the device's native execution environment but in a controlled one. The
reasons behind this are manifold. First of all, dynamic analysis on the native device
may require expensive hardware which may not be readily available to the analyst
C. Greco () · M. Ianni · A. Guzzo · G. Fortino
Department of Computer Science, Modeling, Electronics and System Engineering (DIMES),
University of Calabria, Arcavacata, Italy
e-mail: claudia.greco@dimes.unical.it; michele.ianni@unical.it; antonella.guzzo@unical.it;
giancarlo.fortino@unical.it
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
C. Savaglio et al. (eds.), Device-Edge-Cloud Continuum, Internet of Things,
https://doi.org/10.1007/978-3-031-42194-5_2
and that may be prone to damage during testing. Also, it can be hard to supply
inputs to guide the analysis (as happens with fuzzing) and to debug. As explained
in [6], when fuzzing low-cost bare-metal devices that lack security mechanisms, it
is hard to detect memory corruption vulnerabilities because of the lack of visible
effects. While, in theory, the device's ports could be used for debugging, these
ports are often obscured or inaccessible. Additionally, with physical hardware, it is
not feasible to perform concurrent executions, which is essential for dynamic
analysis.
For this purpose, we rely on the emulation of the firmware, better known as
firmware re-hosting. This process separates the firmware from the hardware and
emulates it on a different architecture without the need for the actual device.
Firmware re-hosting offers several benefits, including the ability to execute in a
controlled environment, use debuggers for greater insight, concentrate solely on
software components, and benefit from scalability. However, firmware re-hosting
is a challenging task, because the firmware frequently retrieves input directly from
the device peripherals, which can have their own unique access definitions and
different configurations and interfaces.
Several solutions have been proposed in the literature for firmware re-hosting,
based on approaches such as hardware-in-the-loop (HITL), low-level abstractions,
learning, or symbolic execution. Despite the significant progress that has been
made in the area of automated re-hosting and analysis of firmware, the current
solutions come with drawbacks. While the hardware-in-the-loop approach has its
advantages, it is often not feasible due to the difficulty or impossibility of obtaining
the real hardware, or the risk of damaging expensive components during large-scale
automated analysis. Approaches based on abstractions usually incorporate binary
instrumentation techniques to intercept calls to functions that interact with the
hardware. Binary instrumentation adds a substantial overhead to an already slow
emulation environment and severely impacts the performance of dynamic vulnera-
bility discovery techniques like fuzzing, which require a high number of executions
of the binary code.
This chapter has several goals: we analyze the current solutions adopted in the
literature to enable dynamic analysis of re-hosted firmware, together with their
limitations, and we offer our point of view on how progress can be made in this
context in order to enable faster vulnerability discovery processes based on
traditionally employed techniques such as fuzzing. In particular, we discuss our
idea of replacing the device peripherals, and the interactions between firmware and
hardware, with high-level operations, bringing the entire functioning of the
firmware to the software level. This chapter directly addresses a crucial aspect
within the broader domain of the device-edge-cloud continuum: by exploring the
security challenges and vulnerabilities of IoT devices, with a specific focus on
firmware re-hosting and dynamic analysis, our research aligns with the overarching
theme of advancing paradigms, architectures, and applications in this
interconnected landscape.
The chapter is organized as follows: in Sect. 2, we provide background on the
concept of firmware re-hosting and its necessity for dynamic analysis, along with
basic notions about well-known vulnerability discovery techniques and the
different levels of analysis. In Sect. 3, we discuss hardware emulation, its
challenges, and the limitations of existing approaches. In Sect. 4, we review the
state of the art in firmware emulation and discuss the drawbacks of existing
solutions. Finally, in Sect. 5, we describe the idea behind our proposal.
2 Background
Program analysis plays a crucial role in the security assessment of software
systems, and, over the years, there has been significant progress in the development
of new techniques and methodologies to accomplish this task. Great effort has been
put into making program analysis scalable, leading to the development of dynamic
analysis tools, such as fuzzers and symbolic execution engines, that make it
possible to analyze binary programs that, as often occurs, do not come with their
source code. Popular examples of fuzzing and symbolic execution tools in the
security community are AFL [7] and angr [8]. However, program analysis tools,
especially when performing dynamic analysis, need high levels of parallelism and
scalability to function properly, which necessitates moving the execution into an
emulated environment.
With the advent of IoT, the use of such tools, originally meant for desktop and
mobile systems, has been extended to various devices and their firmware. The
necessity of shifting the execution to a virtual environment in the case of firmware
stems from its dependency on the hardware, which limits the ability to observe its
behavior. For example, in order to perform security testing of a firmware by means
of fuzzing, it is often necessary to substitute the values derived from peripheral
interactions with inputs generated by a fuzzer. It follows that the execution must be
taken out of its native environment and performed in an emulated, controlled
environment, without involving physical components. The process of emulating a
firmware in a way that accurately replicates its behavior on real hardware is
referred to as re-hosting. Firmware re-hosting makes it possible to thoroughly
examine and manipulate firmware in ways that are not feasible on physical
hardware and offers many benefits to analysts, including the ability to limit the
scope of the analysis to software components alone while providing scalability. In
this way, no physical embedded component of the device is necessary for the
security analysis process to be carried out, and the program execution can be
attached to debuggers, making it possible to get a more in-depth understanding of
program execution.
Unfortunately, the task of firmware re-hosting is not without challenges, since,
while running, the firmware interacts with peripherals. It follows that, in order to
obtain a properly functioning emulation of the embedded system, firmware
re-hosting implies modeling, along with the CPU, the behavior of the device
peripherals. Peripherals fall into two categories: on-chip peripherals include
components such as timers, bus controllers, networking elements, and serial ports,
while off-chip peripherals include sensors, actuators, external storage devices, and
other circuit board circuitry that is accessed via on-chip peripherals. Specifically,
on-chip peripherals, such as general-purpose input/output (GPIO) or bus interfaces
like inter-integrated circuit (I2C) and serial peripheral interface (SPI), mediate the
communication between the firmware and off-chip peripherals and are typically
controlled by the CPU through memory-mapped input/output (MMIO), allowing
programs to access them via memory. The absence of these components can result
in the firmware crashing or producing outcomes that deviate from those generated
when real hardware is employed, and since many system functions involve
interactions with both on-chip and off-chip peripherals, realizing a proper
emulation of them is vital.
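A re-hosting environment therefore has to answer the firmware's memory-mapped reads and writes with a software model of the peripheral. The following minimal sketch shows the idea for a GPIO block; the addresses and register layout are invented for illustration and do not correspond to any real chip:

```python
# Toy software model of an MMIO-controlled on-chip peripheral (invented layout).
GPIO_BASE = 0x40020000
GPIO_DATA = GPIO_BASE + 0x00   # read: pin levels; write: drive output pins
GPIO_DIR  = GPIO_BASE + 0x04   # direction register (input/output per pin)

class GPIOModel:
    """Answers the firmware's memory-mapped accesses instead of real hardware."""
    def __init__(self):
        self.regs = {GPIO_DATA: 0, GPIO_DIR: 0}

    def mmio_read(self, addr: int) -> int:
        # Unmapped registers read as zero in this toy model.
        return self.regs.get(addr, 0)

    def mmio_write(self, addr: int, value: int) -> None:
        if addr in self.regs:
            self.regs[addr] = value & 0xFFFFFFFF   # registers are 32 bits wide
```

An emulator would invoke `mmio_read`/`mmio_write` whenever the emulated CPU touches this address range, so the firmware observes plausible register behavior without any physical pins being present.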
2.1 Vulnerability Discovery Techniques
As stated above, several popular dynamic analysis techniques employed during the
vulnerability discovery process benefit from system emulation. In this section, we
briefly outline some of the most popular of these methodologies.
2.1.1 Fuzzing
The goal of fuzzing is to identify inputs that cause the program to behave in
unexpected ways or even crash, revealing the presence of vulnerabilities, bugs, or
other security issues in the programs that could be exploited by attackers. During
fuzzing, a large number of values randomly or semi-randomly generated by a fuzzer
are given as input to the target program, which is then monitored for possible
crashes, errors, or unexpected outputs. Popular fuzzers are AFL [7], libFuzzer,1 and
Honggfuzz.2
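The mutate-and-monitor loop can be sketched in a few lines; the mutation strategy and function names below are our own toy illustration, not the algorithm of any of the fuzzers cited above:

```python
import random

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Overwrite a few random bytes of the seed (a toy mutation strategy)."""
    data = bytearray(seed)
    for _ in range(rng.randint(1, 4)):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 1000) -> list:
    """Feed mutated inputs to `target`, collecting those that raise (crash)."""
    rng = random.Random(0)          # fixed seed for reproducibility
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            target(candidate)       # monitored execution of the target program
        except Exception:
            crashes.append(candidate)
    return crashes
```

Real fuzzers add coverage feedback, corpus management, and crash triage on top of this basic loop.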
2.1.2 Concolic Execution
Concolic execution combines symbolic and concrete execution to analyze the
behavior of computer programs. It runs the program with concrete inputs while
maintaining a symbolic state. It is also known as dynamic symbolic execution, and
it differs from static symbolic execution in that it explores only one path at a time,
the one determined by the concrete inputs. To explore different paths, the technique
"flips" path constraints and uses a constraint solver to compute concrete inputs that
lead to the alternative branches.
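The constraint-flipping step can be illustrated with a deliberately tiny example: one integer input and one threshold branch, with the constraint "solver" reduced to picking a boundary value (a real engine would delegate this step to an SMT solver):

```python
# Toy concolic step for a program with a single branch (illustrative only).
def program(x: int) -> str:
    return "big" if x > 100 else "small"   # the branch under analysis

def concolic_step(x0: int):
    """Run concretely, record the path constraint (x > 100 or its negation),
    then solve the flipped constraint to reach the other branch."""
    taken = x0 > 100                        # constraint observed on this run
    flipped_input = 100 if taken else 101   # minimal solution of the negation
    return program(x0), flipped_input
```

For example, `concolic_step(5)` follows the "small" path and proposes 101 as a concrete input that drives execution down the "big" branch.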
1 https://llvm.org/docs/LibFuzzer.html
2 https://github.com/google/honggfuzz
2.1.3 Binary Instrumentation
Binary instrumentation is used to gather information about the behavior and
performance of an executable by adding code to the compiled binary.
Instrumentation is meant to track and record data about program execution, such as
function call information, memory accesses, and performance metrics, which can
be used for a variety of purposes, including debugging, testing, profiling, and other
security analyses. The instrumentation process is performed at the machine-code
level, making it platform-independent, and can be used to analyze a wide range of
software, including low-level system code, firmware, and high-level applications.
However, binary instrumentation can also introduce overhead during program
execution, as the added code can slow down the program and consume more
memory.
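As a rough source-level analogue of call-count instrumentation (using Python's tracing hook rather than true machine-code rewriting, so purely illustrative of the kind of data collected and of the overhead the hook itself introduces):

```python
import sys
from collections import Counter

def trace_calls(func, *args):
    """Run `func` under a tracing hook that records how often each function
    is entered, mimicking call-count instrumentation of a binary."""
    counts = Counter()

    def tracer(frame, event, arg):
        if event == "call":
            counts[frame.f_code.co_name] += 1
        return None            # no per-line tracing needed

    sys.settrace(tracer)       # the hook itself is the source of the overhead
    try:
        result = func(*args)
    finally:
        sys.settrace(None)     # always remove the hook
    return result, counts
```

Binary instrumentation frameworks insert the equivalent bookkeeping directly into the compiled machine code, which is why they slow down every traced execution.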
2.2 Analysis Levels
The software security analysis of an embedded system can be performed at different
levels: full-system, process level, or application level.
– With full-system emulation, the firmware runs inside a virtual environment
recreated by an emulator, which mimics every component of the original system,
from the processor to the hardware peripherals. This approach can test the system
in every aspect, generating the same data and behavior as the original target
system, but it is the hardest to achieve, as well as the slowest compared to other
levels of emulation. Full-system emulation is possible by means of base emulators
such as QEMU and Simics, although these provide only a small set of peripherals,
which does not cover the wide and diverse range of possible hardware in
embedded systems.
– Analysis at the process level allows the emulation of the behavior of specific
processes inside the target system. The processes can be executed inside the
native system or on a different hardware platform whose operating system
provides an execution environment resembling the native one. Emulators such as
QEMU and Simics allow process-level analysis through user-mode emulation.
Process-level emulation is faster than full-system emulation; however, the results
of the analysis may differ from reality if the emulated execution environment is
not faithful to the original, thus compromising the vulnerability discovery
process.
– Analysis at the application level consists of analyzing a single application that
runs on the native target system. This can be done both statically, by extracting
application-specific data, and dynamically, by running the application itself. The
limitation of the static approach is that reducing the analysis to the evaluation of
statically extracted data can detect existing vulnerabilities in the specific
application, but not within the system that interacts with that application. In the
case of dynamic analysis, the execution is usually carried out in the native
execution environment. This analysis level is faster than the others, since it does
not require the emulation of a whole system or process; however, the emulation
may not be accurate if the target application depends on native hardware features
not supported by the execution environment.
A problem encountered when full-system emulation is not involved is that the
host platform used to run the firmware is not necessarily capable of supporting
dynamic analysis tools. This stems from the fact that IoT systems are generally
lightweight and have reduced computing and storage resources compared to
traditional systems.
3 Motivation
Firmware re-hosting in IoT devices constitutes a significant challenge due to the
wide heterogeneity of both hardware and software components, especially in
comparison to desktop and mobile systems, where standardized execution
environments and a limited number of operating systems and architectures make
the issue much easier to handle. The creation of a single generic emulator capable
of transparently hosting any given firmware turns out to be a highly impractical
goal due to the remarkable diversity of existing embedded systems and architecture
designs, as well as the proprietary nature of some chip designs. This diversity
results from the combination of various hardware architectures (x86, ARM, MIPS,
and so on), different types of embedded peripherals, multiple operating systems,
and customized configurations and interfaces. The conjunction of these factors
leads to a long list of realizable embedded systems, making it challenging to design
a general emulator for firmware re-hosting.
These challenges have encouraged the development of a variety of emulation
solutions. A widely adopted approach is hardware-in-the-loop (HITL) [9–12], in
which the firmware is only partially emulated and, whenever an unsupported I/O
operation is attempted, the request is redirected to the hardware itself. In HITL, the
firmware interacts with a hardware platform that mimics the real hardware. The
platform might be an actual piece of hardware or a hardware simulator, and it
provides the peripheral interfaces and inputs/outputs that the firmware requires to
function correctly. This lessens the need for access to the real target hardware and
enables testing and evaluation of the firmware in a controlled environment. Various
studies employ operating system and/or hardware abstractions [13–15] to take
advantage of the abstraction layer provided by the firmware. An operating system
abstraction is a layer of software that provides a standard interface for the firmware
to interact with the hardware, while a hardware abstraction provides a similar layer
of abstraction that is specific to the hardware being used. Such abstractions provide
high-level representations of the underlying hardware, thus enabling the firmware
to interact with the hardware in a manner that is independent
Firmware Dynamic Analysis Through Rewriting 25
of the actual hardware’s implementation. Conversely, other studies aim for full-
system emulation [16–18] without the presence of actual hardware. With full-system
emulation, the behavior of an entire embedded system, including the firmware and
the underlying hardware, is recreated in a virtual environment. These works focus
on the automated creation of models that describe the interactions between firmware
and hardware, allowing for the replaying of these interactions without a direct
connection to the specific device or even the learning of these interactions through
models generated from recorded real interactions. Once the software and hardware
models have been created, they can be integrated into a virtual environment that
simulates the behavior of the real system. The virtual environment can be used to
test and evaluate the firmware in a manner that is independent of the underlying
hardware. This makes it possible to test and evaluate firmware on different hardware
platforms, or even on platforms that do not yet exist, without the need for access to
the actual hardware.
Although much progress has been made on the firmware re-hosting
challenge, the current approaches come with limitations. HITL-based solutions are
effective in enabling interactivity and allowing testers to utilize dynamic analysis
tools to input data into the firmware. However, this method introduces latency in
the forwarding process, thus impeding the execution speed, reducing parallelism
and scalability, and limiting its performance as a testing approach. Furthermore,
HITL still entails a substantial tie between firmware and hardware. Methods
that rely on operating system or hardware abstractions overcome the drawbacks of HITL,
though these approaches are limited in the types of firmware they
can handle. Indeed, in order to accommodate a broad spectrum of firmware, it
is crucial for emulators to be devoid of high-level abstractions. Finally, learning-
based solutions still require interactions with real hardware to collect data on
the peripherals’ behavior. To overcome the limitations of the currently adopted
methodologies for re-hosting, we propose a novel approach that takes advantage
of binary rewriting techniques. Binary rewriting modifies the behavior of
a compiled program without access to its source code and without recompilation, while keeping
the binary executable. Binary rewriting can be classified as either static or dynamic:
static binary rewriting makes permanent modifications to the binary file on disk,
while dynamic binary rewriting modifies the binary as it executes,
without making any permanent change.
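As a minimal sketch of the static flavor, an equal-length patch can be spliced into the image on disk so that every other code and data offset stays valid. The byte values and offset below are invented for illustration and taken from no real firmware; `00 bf` does, however, encode a Thumb-2 `nop` in little-endian order:

```python
def static_rewrite(image: bytes, offset: int, patch: bytes) -> bytes:
    """Overwrite len(patch) bytes at `offset`, keeping the image size
    unchanged so that every other code and data offset stays valid."""
    if offset + len(patch) > len(image):
        raise ValueError("patch runs past the end of the image")
    return image[:offset] + patch + image[offset + len(patch):]

# Fictitious 8-byte code region; we NOP out the 2-byte instruction at
# offset 2 (00 bf is the little-endian encoding of a Thumb-2 `nop`).
firmware = bytes.fromhex("0248 8047 7047 0000")
patched = static_rewrite(firmware, 2, bytes.fromhex("00bf"))
assert len(patched) == len(firmware)   # layout untouched elsewhere
```

Equal-length patching is the simplest case; inserting or removing bytes would shift every later offset and force the rewriter to fix up branch targets and data references.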
Several binary rewriting methodologies have been proposed in the literature, including both static [19–23] and dynamic [24–28] solutions. Binary rewriting can be
applied in a variety of ways, such as monitoring a program during execution,
optimizing it, and emulating it [29].
Finally, many proposals build the emulation on top of base emulators such as
QEMU [30] and Simics [31]. The exclusive use of these tools for hardware emu-
lation is indeed discouraged since they are only able to emulate a restricted list of
CPUs and peripherals and support a limited number of possible configurations [30]
and may also require significant analyst intervention [31].
26 C. Greco et al.
4 State-of-the-Art Approaches and Their Limitations
Firmware re-hosting has drawn much interest in the literature, and various works
have introduced solutions to the hardware emulation challenge, with the
ultimate goal of enabling faster security assessment of firmware.
HITL-Based Approaches
Several works such as Avatar [9], Avatar2 [32], PROSPECT [10], SURROGATES [11],
Inception [33], and Charm [12] pursue the HITL approach, proposing
partial emulation of firmware with unsupported I/O requests redirected to real
peripherals, which still implies a strong dependence on the hardware. In order
to conduct dynamic analysis, these works propose partially offloading the execution
of firmware to actual hardware, thus compromising scalability.
Abstraction-Based Approaches
Other works design emulation on top of OS abstractions [13, 14] or hardware
abstraction layers (HALs) [15]. Firmadyne [13] is a full-system emulation tool for
automated large-scale dynamic analysis of firmware. When a firmware image is
provided to Firmadyne, the tool extracts the file system and performs an analysis to
determine the hardware specifics. Then, a pre-built Linux kernel that corresponds to
these specifics is employed, and an initial emulation is conducted using QEMU to
infer the system and network configuration. Costin et al. [14] present a framework
for scalable security testing of embedded web interfaces using dynamic analysis
tools. The framework relies on the emulation of firmware images via QEMU,
by replacing the system native kernel with a default kernel – for a specific CPU
architecture – supported by QEMU. The limitation of relying on an OS abstraction is
that only a reduced set of firmware is supported by the framework; indeed,
both [13, 14] handle only Linux-based firmware. HALucinator [15] relies
on a technique called high-level emulation (HLE) to perform dynamic analysis
on firmware in embedded systems. The authors leverage the use of hardware
abstraction layers (HALs) commonly used by firmware developers to simplify their
jobs, as a basis for re-hosting and analyzing firmware. The technique works by
first identifying the library functions responsible for hardware interactions in a
firmware image and then providing high-level replacements for these functions in a
full-system emulator such as QEMU. The authors demonstrated the practicality of
HLE for security analysis by supplementing their prototype system, HALucinator,
with a fuzzer to locate multiple previously unknown vulnerabilities in firmware
middleware libraries. In [34], a technique called para-rehosting is proposed to make
the re-hosting of microcontroller (MCU) software on commodity hardware smoother. The
authors implemented a portable MCU (PMCU) using the POSIX interface, which
models common functions of the MCU cores and accurately replicates the common
behaviors of real MCUs. They abstracted and modeled common functions of MCU
cores and proposed HAL-based peripheral function replacement, in which high-
level hardware functions are replaced with an equivalent back-end driver on the host,
allowing for incremental plug-and-play library porting. Both [15] and [34] present
interactive and hardware-independent environments. However, these environments
are built over the assumption that the firmware relies on HALs, which may not
always be the case. In order to be able to deal with a wider range of firmware,
emulators should be abstraction-free, meaning that they should not depend on high-
level constructs.
Learning-Based Approaches
Other approaches automatically create emulators for firmware by capturing and
reproducing the data generated during I/O interactions, thus modeling the hardware
behavior. This allows large-scale and interactive executions, but it inevitably
necessitates recording traces from within the device itself, thereby restricting
accurate execution in the emulator to only the recorded program paths.
In [16–18, 35, 36], the peripherals' behavior is learned, and the interactions between
the firmware and the hardware are modeled in order to enable virtualized execution
of firmware without implementing peripheral emulators at all. Pretender [16] and
Conware [35] gather a set of observations of the low-level interactions between the
firmware and the original peripherals by means of HITL or code instrumentation.
Pretender then utilizes machine learning to generate models of the memory-
mapped input/output (MMIO) operations and interrupt-driven peripherals, which
can replace the physical hardware. In contrast, Conware generates composable
automata representations based on the collected recordings to model the peripherals,
which can be merged to build generalized models. Both Pretender and Conware
intend to emulate arbitrary firmware without having to instrument the actual
firmware; however, they require access to the physical hardware during the
training phase in order to gather the observations needed for the emulation, and
they require instrumentation to detect interactions with the hardware. Also, Pretender
only provides models for interrupt-based firmware, which are therefore not generic
or suitable for non-interrupt-based firmware. P2IM [17] is a framework for MCU
firmware approximate emulation that does not model the physical
peripherals themselves, but treats them as black boxes. Whenever the running
firmware requests an interaction with a peripheral, it is provided with acceptable
inputs that simply satisfy internal checks and do not cause the execution to halt
or crash. P2IM does not require deep knowledge of the peripherals' behavior,
allowing only a small set of values to be considered as possible inputs to the
firmware. Although this mode of operation allows hardware-independent emulation, it
reduces the ability to effectively represent complex firmware logic.
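The record-and-replay idea behind these systems can be reduced to a small sketch: MMIO reads observed on the real device are logged per register address and later replayed by a software model that stands in for the peripheral. The register addresses and values below are invented, and the actual tools build far richer models (e.g., composable automata in Conware):

```python
from collections import defaultdict

class ReplayPeripheral:
    """Replays per-register read sequences recorded from real hardware;
    once a register's trace is exhausted, the last value repeats."""
    def __init__(self, trace):
        self.log = defaultdict(list)
        for addr, value in trace:       # recorded (address, value) pairs
            self.log[addr].append(value)
        self.pos = defaultdict(int)     # replay cursor per register

    def read(self, addr):
        values = self.log.get(addr)
        if not values:
            return 0                    # never-seen register: benign default
        i = min(self.pos[addr], len(values) - 1)
        self.pos[addr] += 1
        return values[i]

# Invented trace: a status register at 0x40000000 goes busy -> ready,
# then a data register at 0x40000004 yields one sample.
trace = [(0x40000000, 0x0), (0x40000000, 0x1), (0x40000004, 0x2A)]
dev = ReplayPeripheral(trace)
assert [dev.read(0x40000000), dev.read(0x40000000)] == [0x0, 0x1]
assert dev.read(0x40000004) == 0x2A
```

The repeat-last-value fallback is one illustrative policy; generalizing beyond the recorded paths is exactly where the learning-based approaches differ from naive replay.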
Symbex-Based Approaches
Other works achieve firmware re-hosting by means of symbolic execution [18, 36,
37]. Laelaps [18] and Jetset [36] propose symbolic execution-based approaches
to infer the peripheral behavior expected by the firmware. Laelaps is a
concolic execution-based firmware re-hosting framework that combines concrete
and symbolic execution. It uses a full-system emulator such as QEMU to run
the firmware and gain the inner state of the execution and switches to symbolic
execution whenever an access to an unimplemented peripheral is attempted, in
order to find a valid input that leads the execution to a path resembling realistic
behavior. In order to prevent path explosion, Laelaps relies on the Context-Preserving
Scanning Algorithm (CPSA) heuristics, which are able to infer inputs
valid for the near-future execution but which may cause the execution to crash in the
long term. Jetset is a tool that relies on symbolic execution to infer how the
peripheral devices are expected to behave in their interaction with the firmware. The
inferred behaviors are used while the firmware runs in an emulator such as QEMU in
order to reproduce a target device's functionality. Path explosion is mitigated using
guided symbolic execution with a variation of Tabu search to minimize the distance
to the goal. However, following this approach, the direction to take at each branch
is chosen by looking at the distance to the goal, which can make it difficult to model
more complex behaviors. μEmu [37] uses symbolic execution to extract valuable
information and build a knowledge base that is used to emulate the peripherals'
behavior during firmware re-hosting. As it carries out the knowledge extraction process,
μEmu tries to avoid path explosion by switching to another path only when
the current one is found invalid. However, it fails to emulate complex peripherals'
behaviors (Table 1).

Table 1 Strengths and limitations of existing approaches

Approaches        | Articles           | Strengths                                                    | Limitations
HITL              | [9–12], [32], [33] | Interactivity; dynamic analysis enabled                      | Hardware dependency; limited scalability; latency
Abstraction-based | [13–15]            | Hardware independence                                        | Abstractions not always available
Learning-based    | [16–18, 35, 36]    | Hardware independence after training                         | Hardware dependency during dataset recording
Symbex-based      | [18, 36, 37]       | Extensive path exploration; automatable test case generation | Potential path explosion; problematic modeling of complex behavior
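Stripped to its essence, the input-inference step common to these systems answers a constraint query: find a concrete peripheral value under which the firmware's branch condition takes the desired direction. The sketch below does this by brute force over a tiny domain; real systems issue the equivalent query to an SMT solver over symbolic expressions, and the status-bit layout here is invented:

```python
def infer_peripheral_value(branch_condition, domain=range(256)):
    """Search for a concrete peripheral value under which the firmware's
    branch condition takes the desired direction; a stand-in for the
    solver query issued by symbolic-execution-based re-hosters."""
    for candidate in domain:
        if branch_condition(candidate):
            return candidate
    return None            # no value in the searched domain works

# Invented firmware check: proceed only if a READY bit (bit 0) is set
# and an ERROR bit (bit 3) is clear in the peripheral status register.
check = lambda status: (status & 0x1) != 0 and (status & 0x8) == 0
value = infer_peripheral_value(check)
assert value is not None and check(value)
# The emulator then feeds `value` back on the next read of the register,
# so execution proceeds down a plausible path instead of spinning.
```

Path explosion arises because each such query multiplies with every branch encountered, which is why Laelaps, Jetset, and μEmu each add heuristics to prune the search.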
5 Our Approach
We propose an approach to enable dynamic security assessment techniques such as
fuzzing on firmware which extends and somehow redefines our proposal in [38]. Our
proposal relies on binary rewriting to obtain a full-system emulation of firmware,
ensuring hardware independence as well as interactivity and overcoming the
limitations experienced in the current approaches. By providing innovative insights
and practical solutions to the security concerns surrounding IoT devices, this chapter
contributes directly to the advancement of knowledge and practices within the realm
of the device-edge-cloud continuum. Our research underscores the significance of
addressing vulnerabilities in IoT devices and highlights the potential for improved
paradigms, architectures, and applications in ensuring the integrity and resilience of
this interconnected ecosystem. Besides the drawbacks already discussed in Sect. 3,
most of the current approaches involve the use of binary instrumentation to intercept
the invocations to functions related to I/O interactions with peripherals and forward
them to their replacement models. Since the instrumented code is meant to be
executed in a virtual environment outside the firmware execution environment, its
use significantly slows down the process.
In our proposal, we completely bypass that step by fully replacing the interactions
between the firmware and the hardware with code. We integrate the embedded
peripherals’ behavior at a high level by firmware rewriting, avoiding the involvement
of lower-level abstractions such as relying on OS assumptions or using HALs. In
this way, we enable the application of vulnerability assessment techniques based on
a large number of executions and possibly crashes of the binaries under analysis.
The process involves the following steps: (i) Portions of the binary code that
constitute interactions with the peripherals must be identified. This step can be
accomplished through various methods such as manual reverse engineering, debug-
ging firmware in a hardware-in-the-loop environment, or locating HAL functions
via library matching. (ii) Once these functions are identified, their behavior must be
rewritten to ensure successful emulation of the firmware, a challenging
task that relies on manual development because it is difficult to automate. However,
the literature suggests several approaches that can automatically develop
replacement models for the original functions by recording firmware interactions with
hardware peripherals, as discussed in Sect. 4. (iii) To perform the actual replacement
of functions that interact with hardware with ad hoc implementations, we rely on
binary rewriting techniques. This is the most significant aspect of our proposal
as it avoids intercepting calls using binary instrumentation approaches, thereby
significantly speeding up the emulation process. The binary rewriting step can be
performed in several ways, some of which have already been introduced in Sect. 4.
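As a toy illustration of the replacement step, the sketch below appends a replacement routine to a flat image and overwrites the entry of the identified I/O function with an x86 `jmp rel32` to it. The image contents, offsets, and choice of x86 encoding are purely illustrative; a real rewriter must also handle relocations and alignment:

```python
import struct

def patch_jump(image: bytearray, func_off: int, target_off: int) -> None:
    """Overwrite the entry of the function at `func_off` with an x86
    `jmp rel32` (opcode 0xE9) to `target_off`, redirecting every call
    into the replacement routine appended to the image."""
    rel = target_off - (func_off + 5)      # rel32 is relative to the next insn
    image[func_off:func_off + 5] = b"\xe9" + struct.pack("<i", rel)

image = bytearray(b"\x90" * 32)            # fictitious 32-byte firmware image
replacement = b"\xb8\x2a\x00\x00\x00\xc3"  # mov eax, 42; ret
target = len(image)
image += replacement                        # step (ii): ad hoc model appended
patch_jump(image, 8, target)                # step (iii): rewrite function entry

assert image[8] == 0xE9
assert struct.unpack("<i", bytes(image[9:13]))[0] == target - 13
```

Because the jump is written once into the image, every subsequent call takes it at native speed, with no per-call interception as in instrumentation-based interposition.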
The most significant benefit of rewriting, compared to current approaches, is that
the firmware can be treated as normal software from an emulation perspective. As a
result, the entire re-hosting process is much faster, and vulnerability assessment
techniques such as fuzzing can be easily adopted without incurring excessive
slowdown due to binary instrumentation.
In fact, during the normal operation of a device, numerous interactions with the
hardware occur. When binary instrumentation techniques are used to identify these
interactions, the peripheral emulation process undergoes a significant slowdown.
For example, consider an industrial control system (ICS) that leverages REST
APIs to expose the status of a humidity sensor. In particular, when a request for
humidity data is handled through such APIs, the ICS calls a library function supplied
by the manufacturer, which in turn invokes another function meant to interact with
the peripherals, which is an HAL function provided by the microcontroller vendors,
able to read the humidity value from the chip, by means of a serial communication.

Fig. 1 Example: ICS rewritten firmware (application, middleware, HAL, and hardware layers; the network and serial HAL functions are replaced by rewritten functions)
The REST service relies on the HTTP protocol, which is implemented on top of a
TCP communication that, in our example, is provided by a library using a HAL
to communicate with the Ethernet port. Even considering
such a simple scenario, we can identify several interactions between firmware
and hardware, e.g., function calls to the microcontroller HAL and communication
with the Ethernet port. Our idea consists in replacing these functions with custom
implementations, in order to be able to dissolve the firmware-hardware bonds and
achieve complete independence from the underlying hardware, as illustrated in
Fig. 1. By rewriting the functions related to I/O interaction, we achieve
firmware emulation without the need for binary instrumentation.
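In the spirit of this example, the humidity path could be sketched as follows: only the vendor's serial HAL call is replaced by a host-side stub, while the middleware and application layers above it run unchanged. All function names and the canned reading are invented for illustration:

```python
# Conceptual call chain: REST handler -> sensor library
# -> serial HAL -> serial device. Only the last hop is replaced.

def hal_serial_read_rewritten() -> bytes:
    """Replacement for the vendor's serial HAL: returns a canned frame
    instead of clocking bytes over a real serial bus."""
    return b"\x37"                       # hypothetical raw reading: 55%

def humidity_sensor_read(hal_read) -> int:
    """Manufacturer middleware, unchanged: decodes the raw frame."""
    return int.from_bytes(hal_read(), "big")

def rest_get_humidity(hal_read=hal_serial_read_rewritten) -> dict:
    """Application layer, unchanged: exposes the value via REST."""
    return {"humidity_percent": humidity_sensor_read(hal_read)}

assert rest_get_humidity() == {"humidity_percent": 55}
```

Because the layers above the HAL are untouched, a fuzzer can drive the REST interface at full emulated speed while the stub stands in for the sensor.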
6 Conclusions and Future Work
We inspected the state of the art of firmware re-hosting for vulnerability assessment
purposes, and we extensively analyzed the strengths and weaknesses of existing
solutions. In particular, we analyzed how dynamic analysis techniques that require
a high number of executions to be performed quickly, as occurs in fuzzing, turn
out to be impractical with current approaches. Our research offers an alternative solution
that utilizes binary rewriting to address the firmware re-hosting challenges, thereby
enabling firmware to be tested more efficiently using dynamic analysis techniques.
The chapter aims to initiate a dialogue in the field of rapid firmware fuzzing and
posits that the proposed methodology can enhance the existing vulnerability assess-
ment methodologies of analysts. Currently, we are in the process of implementing
the proposed approach and identifying the optimal techniques to employ at each
stage of the rewriting process. Our preliminary findings are promising and show the
effectiveness of our proposal.
References
1. G. Fortino, A. Guzzo, M. Ianni, F. Leotta, M. Mecella, Exploiting marked temporal point
processes for predicting activities of daily living, in 2020 IEEE International Conference on
Human-Machine Systems (ICHMS) (IEEE, 2020), pp. 1–6
2. G. Fortino, A. Guzzo, M. Ianni, F. Leotta, M. Mecella, Predicting activities of daily living
via temporal point processes: approaches and experimental results. Comput. Electr. Eng. 96,
107567 (2021)
3. G. Fortino, A. Guerrieri, P. Pace, C. Savaglio, G. Spezzano, IoT platforms and security: an
analysis of the leading industrial/commercial solutions. Sensors 22(6), 2196 (2022)
4. Y. He, Z. Zou, K. Sun, Z. Liu, K. Xu, Q. Wang, C. Shen, Z. Wang, Q. Li, {RapidPatch}:
firmware hotpatching for {Real-Time} embedded devices, in 31st USENIX Security Sympo-
sium (USENIX Security 22) (2022), pp. 2225–2242
5. A. Guzzo, M. Ianni, A. Pugliese, D. Saccà, Modeling and efficiently detecting security-critical
sequences of actions. Futur. Gener. Comput. Syst. 113, 196–206 (2020)
6. M. Salehi, L. Degani, M. Roveri, D. Hughes, B. Crispo, Discovery and identification of
memory corruption vulnerabilities on bare-metal embedded devices. IEEE Trans. Dependable
Secure Comput. 20(2), 1124–1138 (2022)
7. M. Zalewski, American fuzzy lop (AFL). [Online]. Available: https://lcamtuf.coredump.cx/afl/
8. Y. Shoshitaishvili, R. Wang, C. Salls, N. Stephens, M. Polino, A. Dutcher, J. Grosen, S. Feng,
C. Hauser, C. Kruegel, G. Vigna, SoK: (state of) the art of war: offensive techniques in binary
analysis, in IEEE Symposium on Security and Privacy (2016)
9. J. Zaddach, L. Bruno, A. Francillon, D. Balzarotti et al., Avatar: a framework to support
dynamic security analysis of embedded systems’ firmwares, in NDSS, vol. 14 (2014), pp. 1–16
10. M. Kammerstetter, C. Platzer, W. Kastner, Prospect: peripheral proxying supported embedded
code testing, in Proceedings of the 9th ACM Symposium on Information, Computer and
Communications Security (2014), pp. 329–340
11. K. Koscher, T. Kohno, D. Molnar, {SURROGATES}: enabling {Near-Real-Time} dynamic
analyses of embedded systems, in 9th USENIX Workshop on Offensive Technologies (WOOT
15) (2015)
12. S.M.S. Talebi, H. Tavakoli, H. Zhang, Z. Zhang, A.A. Sani, Z. Qian, Charm: facilitating
dynamic analysis of device drivers of mobile systems, in 27th USENIX Security Symposium
(USENIX Security 18) (2018), pp. 291–307
13. D.D. Chen, M. Woo, D. Brumley, M. Egele, Towards automated dynamic analysis for linux-
based embedded firmware, in NDSS, vol. 1 (2016), pp. 1–1
14. A. Costin, A. Zarras, A. Francillon, Automated dynamic firmware analysis at scale: a case
study on embedded web interfaces, in Proceedings of the 11th ACM on Asia Conference on
Computer and Communications Security (2016), pp. 437–448
15. A.A. Clements, E. Gustafson, T. Scharnowski, P. Grosen, D. Fritz, C. Kruegel, G. Vigna,
S. Bagchi, M. Payer, {HALucinator}: firmware re-hosting through abstraction layer emulation,
in 29th USENIX Security Symposium (USENIX Security 20) (2020), pp. 1201–1218
16. E. Gustafson, M. Muench, C. Spensky, N. Redini, A. Machiry, Y. Fratantonio, D. Balzarotti,
A. Francillon, Y.R. Choe, C. Kruegel et al., Toward the analysis of embedded firmware through
automated re-hosting, in 22nd International Symposium on Research in Attacks, Intrusions and
Defenses (RAID 2019) (2019), pp. 135–150
17. B. Feng, A. Mera, L. Lu, {P2IM}: scalable and hardware-independent firmware testing via
automatic peripheral interface modeling, in 29th USENIX Security Symposium (USENIX
Security 20) (2020), pp. 1237–1254
18. C. Cao, L. Guan, J. Ming, P. Liu, Device-agnostic firmware execution is possible: a concolic
execution approach for peripheral emulation, in Annual Computer Security Applications
Conference (2020), pp. 746–759
19. E. Bauman, Z. Lin, K.W. Hamlen et al., Superset disassembly: statically rewriting x86 binaries
without heuristics, in NDSS (2018)
20. J.R. Larus, T. Ball, Rewriting executable files to measure program behavior. Softw.: Pract.
Experience 24(2), 197–218 (1994)
21. G. Ravipati, A.R. Bernat, N. Rosenblum, B.P. Miller, J.K. Hollingsworth, Toward the
deconstruction of dyninst. University of Wisconsin, Technical report, vol. 32, 2007
22. D.W. Wall, Systems for late code modification, in Code Generation–Concepts, Tools, Tech-
niques: Proceedings of the International Workshop on Code Generation (Springer, London,
1992), pp. 275–293
23. L. Van Put, D. Chanet, B. De Bus, B. De Sutter, K. De Bosschere, Diablo: a reliable,
retargetable and extensible link-time rewriting framework, in Proceedings of the Fifth IEEE
International Symposium on Signal Processing and Information Technology, 2005 (IEEE,
2005), pp. 7–12
24. K. Scott, J. Davidson, Strata: a software dynamic translation infrastructure, in IEEE Workshop
on Binary Translation (2001)
25. C. Cifuentes, B. Lewis, D. Ung, Walkabout-a retargetable dynamic binary translation frame-
work, in Workshop on Binary Translation (2002), pp. 22–25
26. J.K. Hollingsworth, B.P. Miller, J. Cargille, Dynamic program instrumentation for scalable
performance tools, in Proceedings of IEEE Scalable High Performance Computing Conference
(IEEE, 1994), pp. 841–850
27. B. Buck, J.K. Hollingsworth, An API for runtime code patching. Int. J. High Perform. Comput.
Appl. 14(4), 317–329 (2000)
28. C.-K. Luk, R. Cohn, R. Muth, H. Patil, A. Klauser, G. Lowney, S. Wallace, V.J. Reddi,
K. Hazelwood, Pin: building customized program analysis tools with dynamic instrumentation.
ACM SIGPLAN Not. 40(6), 190–200 (2005)
29. M. Wenzl, G. Merzdovnik, J. Ullrich, E. Weippl, From hack to elaborate technique–a survey
on binary rewriting. ACM Comput. Surv. (CSUR) 52(3), 1–37 (2019)
30. F. Bellard, QEMU, a fast and portable dynamic translator, in USENIX Annual Technical
Conference, FREENIX Track (2005), pp. 41–46
31. P.S. Magnusson, M. Christensson, J. Eskilson, D. Forsgren, G. Hallberg, J. Hogberg, F. Lars-
son, A. Moestedt, B. Werner, Simics: a full system simulation platform. Computer 35(2), 50–58
(2002)
32. M. Muench, D. Nisi, A. Francillon, D. Balzarotti, Avatar 2: a multi-target orchestration
platform, in Proceedings Workshop Binary Analysis Research (Colocated NDSS Symposium),
vol. 18 (2018), pp. 1–11
33. N. Corteggiani, G. Camurati, A. Francillon, Inception: {System-Wide} security testing of
{Real-World} embedded systems software, in 27th USENIX Security Symposium (USENIX
Security 18) (2018), pp. 309–326
34. W. Li, L. Guan, J. Lin, J. Shi, F. Li, From library portability to para-rehosting: natively execut-
ing microcontroller software on commodity hardware (2021). arXiv preprint arXiv:2107.12867
35. C. Spensky, A. Machiry, N. Redini, C. Unger, G. Foster, E. Blasband, H. Okhravi, C. Kruegel,
G. Vigna, Conware: automated modeling of hardware peripherals, in Proceedings of the 2021
ACM Asia Conference on Computer and Communications Security (2021), pp. 95–109
36. E. Johnson, M. Bland, Y. Zhu, J. Mason, S. Checkoway, S. Savage, K. Levchenko, Jetset:
targeted firmware rehosting for embedded systems, in 30th USENIX Security Symposium
(USENIX Security 21) (2021), pp. 321–338
37. W. Zhou, L. Guan, P. Liu, Y. Zhang, Automatic firmware emulation through invalidity-guided
knowledge inference, in USENIX Security Symposium (2021), pp. 2007–2024
38. G. Fortino, C. Greco, A. Guzzo, M. Ianni, Enabling faster security assessment of re-hosted
firmware, in 2022 IEEE International Conference on Dependable, Autonomic and Secure
Computing, International Conference on Pervasive Intelligence and Computing, International
Conference on Cloud and Big Data Computing, International Conference on Cyber Science
and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) (IEEE, 2022), pp. 1–6
Performance Analysis of a Blockchain for
a Traceability System Based on the IoT
Sensor Units Along the Agri-Food Supply
Chain
Maria Teresa Gaudio, Sudip Chakraborty, and Stefano Curcio
1 Introduction
The agri-food supply chain can be seen as a complex system of systems (SOS) [1],
and a traceability system along the entire supply chain seems challenging to realize.
Today, different technologies exist to ensure traceability [2–8], but at the same
time, some critical points remain and influence the reliability of the entire system.
In particular, each different product follows a specific supply chain with its
requirements and constraints. Most of the critical points are represented by the
interactions between different actors involved in the supply chain, where the risk
can increase due to less automated layers of protection; thus, malicious actors
[9] – people or otherwise – could intervene, causing fraud and damage to the final
product and to the entire supply chain.
Inserting IoT sensor units at these critical points is attractive both for
real-time monitoring and checking of the process and for possible integration with
blockchain technology. The latter responds well to the four steps of a traceability system –
identification, recording, data links, and reporting [10, 11] – with its fundamental
principles: immutability and transparency, disintermediation and provenance, and
trust and agreement [12]. Moreover, blockchain technology could represent a
solution that can be implemented in all specific supply chains [13, 14].
This work refers to a multilayered solution for agri-food supply chain
traceability proposed in [15]. In this chapter, a more in-depth description of the
blockchain setup in the Hyperledger Fabric environment is given, and the main
results of the transaction simulations are presented. Compared to existing
technologies, the proposed multilayer solution addresses the whole agri-food
supply chain system, not just a single stage of a specific supply chain.

M. T. Gaudio · S. Chakraborty · S. Curcio
Università della Calabria, Rende, Italy
e-mail: mariateresa.gaudio@unical.it; sudip.chakraborty@unical.it; stefano.curcio@unical.it

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
C. Savaglio et al. (eds.), Device-Edge-Cloud Continuum, Internet of Things,
https://doi.org/10.1007/978-3-031-42194-5_3
2 Step-by-Step Hyperledger Blockchain Setting
To obtain a generic scheme applicable to all types of supply chains, the entire agri-food
supply chain was modeled with only three main actors involved:
farmer, manufacturer, and distributor. Production and packaging are treated as a
single manufacturing phase, and the waste management unit is neglected under the
hypothesis of zero waste in the supply chain.
These hypotheses are easier to accept when using Hyperledger Fabric, because
Hyperledger is among the most scalable of existing blockchain solutions.
Therefore, adding further organizations later will be possible and easy to adopt,
provided the initial Hyperledger setup is performed correctly. Moreover, Hyperledger
can be set up as public or private, both permissioned and permissionless; i.e., it
provides very high modularity, capable of adapting to any need [16].
Unlike some other blockchain platforms, Hyperledger is an open-source project hosted by
the Linux Foundation. The main problem could be the large memory capacity required of the
machine in use to store the blocks of the chain, to which
new information is added each time. For this reason, it was decided to install
the blockchain solution on virtual machines (VMs), in
order to preserve the integrity of the physical machine and, at the same time, use a cloud
solution to store all data and blocks. This choice obviously involves a cost,
but it can be managed over time without wearing out the machine. This section shows
the step-by-step working procedure for the Hyperledger blockchain solution used.
2.1 General Architecture
The general architecture proposed for the agri-food supply chain traceability con-
sists of the management of information from farm to fork, i.e., from the agricultural
phase to the end consumer.
The tracking and reconstruction of the information flow passes through the
interactions between the different actors involved in the entire supply chain.
For this general application, the blockchain chosen is an open-permissioned
Hyperledger Fabric blockchain. As a case study, an extra-virgin olive oil supply
chain was considered, with four organizations:
– Organization 1 (Org1) involves the interactions between the manufacturing
process and the agricultural phase, where the raw material comes from.
– Organization 2 (Org2) involves the interactions between the final product coming
from the manufacturing process and the distribution phase.
– Organization 3 (Org3) refers to the interactions involved in the product recogni-
tion, e.g., in the store or a point of consumption.
– Organization 4 (Org4) is the orderer organization.
Each organization has two peers, while the orderer organization has three orderer
peers.
The ordering service is one of the main features of Hyperledger Fabric. It guar-
antees transaction ordering. In fact, Fabric relies on deterministic – not probabilistic
– consensus algorithms; thus, any block validated by the peer is guaranteed to be
final and correct.
Moreover, separating the endorsement of chaincode execution (which happens
at the peers) from ordering gives Fabric advantages in performance and scala-
bility, eliminating bottlenecks which can occur when execution and ordering are
performed by the same nodes.
In brief, this project defines three organizations that hold the information
regarding farming, manufacturing, and distribution, plus a single orderer and a
single channel for this business network.
The entities interact with the blockchain application by invoking chaincode on the
Fabric network, updating the ledger world state, and writing transaction logs.
The blockchain network was set up using four Google VMs. When creating the VMs,
it is convenient to choose the geographical region that costs the least.
Among the available platforms, Google was chosen on the assumption that a company
is likely to already have a Google account, so there is no need to open an account
on another platform. Moreover, Google already returns some performance metrics for
the machine.
As the operating system, Ubuntu 18.04 LTS was installed on each VM. All VMs have
two vCPUs and 4 GB of memory, except VM1, which has 8 GB because it stores the
most data from the different operations: the network configuration, the creation
of the crypto material for each organization, the creation of the channel
artifacts, and the creation of a Docker Swarm network.
In this network, all organizations interact on one channel (see Fig. 1),
generically called “mychannel.” Docker Compose was used to launch the
corresponding Fabric containers; as a first step, the services to run in the
containers are defined in a Docker Compose YAML file.
The installation of Hyperledger Fabric was carried out following the Hyperledger
Fabric documentation, version 2.3 [17], which covers the installation of all
prerequisite packages, notably Docker for managing the containers.
The first step is the creation of an artifacts folder for the channel
configuration. A config folder holds the various configuration YAML files, which
define the information flow to follow, i.e., which chaincode deployment to
perform. To this end, a Membership Service Provider (MSP) is necessary for each
organization.
38 M. T. Gaudio et al.
[Figure 1 depicts the four organizations — Org1: interactions between
manufacturing and farming; Org2: interactions between manufacturing and
distribution; Org3: interactions involved in the final consumer recognition;
Org4: the orderer organization — connected through a single channel over a
Docker Swarm network.]
Fig. 1 Hyperledger Fabric architecture for a generic agri-food supply chain
First of all, the crypto material is created for each organization. Through the
Docker Compose YAML file and a shell script (.sh), written in bash and executable
directly from the terminal, the certificate authorities (CAs), MSPs, and Transport
Layer Security (TLS) certificates are generated in a dedicated crypto-config
folder for each organization.
After the certificates are created, the Genesis block and the channel transaction
files are generated. Through another shell script, named “create-artifacts,” the
Genesis block, the channel configuration transaction, and the anchor-peer
transactions are produced. This script configures a single anchor peer for each
organization.
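Under Fabric 2.x, crypto material and channel artifacts of this kind are typically generated with the cryptogen and configtxgen binaries. The following sketch illustrates the idea; the profile names and paths are illustrative (patterned on the standard Fabric samples, not taken from this project):

```shell
# Generate CAs, MSPs, and TLS certificates for every organization
# into the crypto-config folder (run once, on VM1).
cryptogen generate --config=./crypto-config.yaml --output=./crypto-config

# Genesis block for the ordering service.
configtxgen -profile OrdererGenesis -channelID sys-channel \
  -outputBlock ./channel-artifacts/genesis.block

# Channel creation transaction for "mychannel".
configtxgen -profile BasicChannel -channelID mychannel \
  -outputCreateChannelTx ./channel-artifacts/mychannel.tx

# One anchor-peer update transaction per organization.
configtxgen -profile BasicChannel -channelID mychannel \
  -outputAnchorPeersUpdate ./channel-artifacts/Org1MSPanchors.tx -asOrg Org1MSP
```

These commands assume the Fabric binaries are already on the PATH, as per the prerequisite installation described above.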
The anchor peer plays a critical role in Fabric: it is the peer node that enables
communication between peers of different organizations and discovers all active
channel participants. At this point, it is essential to share all the created
material among the organizations, for example through a dedicated GitHub
repository, by uploading each organization’s certificates to the others, or by
initially creating a single VM and then cloning its disk to create the other VMs.
The creation of a Docker Swarm network is necessary to orchestrate the
instructions on each VM and, therefore, in each organization.
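The swarm itself can be created with standard Docker commands; a sketch, assuming VM1 acts as the swarm manager (the join token is printed by the init command and is shown here only as a placeholder):

```shell
# On VM1 (manager): initialise the swarm, advertising VM1's address.
docker swarm init --advertise-addr <VM1-IP>

# Create an attachable overlay network spanning all VMs, so containers
# on different hosts can reach each other by service name.
docker network create --driver overlay --attachable fabric-net

# On VM2..VM4 (workers): join using the token printed by "swarm init".
docker swarm join --token <worker-token> <VM1-IP>:2377
```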
After the Docker Swarm network is created, all containers must be started in each
organization. This is done through another Docker Compose YAML file, in which
each peer is paired with a CouchDB database where the information is stored.
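A per-organization Compose file of this kind pairs each peer with its own CouchDB container. The fragment below is a minimal sketch; service names, image tags, ports, and credentials are illustrative rather than the project's actual values:

```yaml
version: "3.7"
services:
  couchdb0:
    image: couchdb:3.1
    environment:
      - COUCHDB_USER=admin
      - COUCHDB_PASSWORD=adminpw
    networks:
      - fabric-net

  peer0-org1:
    image: hyperledger/fabric-peer:2.3
    environment:
      # Point the peer's state database at its CouchDB container.
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=adminpw
    depends_on:
      - couchdb0
    networks:
      - fabric-net

networks:
  fabric-net:
    external: true
```

Declaring the overlay network as external lets the containers on every VM attach to the same swarm-wide network.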
2.2 Smart Contract Operations
In Hyperledger Fabric, the smart contract is called “chaincode,” and for this case
study a specific chaincode, called “Foodchain,” was written in the Go language.
Fabric also supports other programming languages, but Go has been shown to be the
most effective for this technology [18].
Table 1 General Foodchain structure algorithm in natural language
Algorithm 1 – Foodchain
1: INITIALIZE the executable function through the main package
2: IMPORT all necessary Go packages to execute the different functions
3: DEFINE the FoodchainContract function to manage the Asset,
   i.e., the information coming from IoT Sensor Units
4: DEFINE user-defined type structures
   to store a collection of the product and the participants
5: INITIALIZE the FoodchainContract
6: INVOKE the createProduct function
   to create the block and add information for a certain product
7: INVOKE the manufactureProcessing function
   to add information about the manufacturing process
8: INVOKE the distributorProcessing function
   to update information about the distribution state
9: INVOKE the query function
   to read the status of the product at a given time
10: IF the query function reads a consistent status of the product
    at a given time, the information and conditions given as input
    to the blockchain are correct,
    ELSE RETURN error
11: INVOKE the setupFoodchainTracer function
    to record all information in the same block
12: RETURN all information contained in a block
Go is well suited to developing fast and scalable blockchain systems. It is not
only simple to learn; it also combines strengths associated with JavaScript and
Python, such as user-friendliness, scalability, stability, and speed, making it a
strong choice for tailor-made blockchain applications.
The Foodchain chaincode is reported and described in natural language, cf.
Table 1.
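The flow of Table 1 can be sketched in plain Go. The following is a minimal, ledger-free sketch in which an in-memory map stands in for the CouchDB-backed world state; the struct fields and status values are illustrative, and only the function names come from Table 1:

```go
package main

import (
	"errors"
	"fmt"
)

// Product mirrors the asset tracked by the Foodchain chaincode
// (field names here are illustrative, not the paper's actual schema).
type Product struct {
	ID           string
	Origin       string // farming data (Org1)
	Manufacturer string // manufacturing data (Org1/Org2)
	Distributor  string // distribution data (Org2/Org3)
	Status       string // CREATED -> MANUFACTURED -> DISTRIBUTED
}

// worldState stands in for the CouchDB-backed ledger world state.
var worldState = map[string]*Product{}

// createProduct mirrors step 6: create the asset and record its origin.
func createProduct(id, origin string) *Product {
	p := &Product{ID: id, Origin: origin, Status: "CREATED"}
	worldState[id] = p
	return p
}

// manufactureProcessing mirrors step 7: add manufacturing information.
func manufactureProcessing(id, manufacturer string) error {
	p, ok := worldState[id]
	if !ok {
		return errors.New("product not found")
	}
	p.Manufacturer = manufacturer
	p.Status = "MANUFACTURED"
	return nil
}

// distributorProcessing mirrors step 8: update the distribution state.
func distributorProcessing(id, distributor string) error {
	p, ok := worldState[id]
	if !ok {
		return errors.New("product not found")
	}
	p.Distributor = distributor
	p.Status = "DISTRIBUTED"
	return nil
}

// query mirrors step 9: read the current status of a product.
func query(id string) (string, error) {
	p, ok := worldState[id]
	if !ok {
		return "", errors.New("product not found")
	}
	return p.Status, nil
}

func main() {
	createProduct("EVOO-001", "farm-calabria")
	manufactureProcessing("EVOO-001", "mill-1")
	distributorProcessing("EVOO-001", "dist-1")
	status, _ := query("EVOO-001")
	fmt.Println(status) // DISTRIBUTED
}
```

In the real chaincode these functions would receive a transaction context from the Fabric contract API and read/write the world state through it rather than through a map.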
The general Foodchain algorithm also contains other small functions, which can be
written either in the chaincode itself or in the subsequent shell scripts that
operate Hyperledger, i.e., during the initializing, invoking, and committing of
the chaincode across the involved organizations; these steps are required to
execute transactions on the asset information.
In these shell scripts, whenever a function is invoked on the peers, it is
essential to replace localhost with the IP address of the VM hosting the orderer
organization; otherwise, the ordering service is not reachable from the other
organizations.
In each of these shell scripts, before each function, the environment variables
and the certificate paths for each peer of the corresponding organization must be
set.
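Concretely, the peer environment and an invocation addressed to the orderer's VM look roughly as follows. IP addresses, file paths, and the MSP ID are placeholders, not the project's actual values:

```shell
# Environment for peer0 of Org1 (paths and MSP ID are illustrative).
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_ADDRESS=<VM1-IP>:7051
export CORE_PEER_MSPCONFIGPATH=./crypto-config/peerOrganizations/org1/users/Admin@org1/msp
export CORE_PEER_TLS_ROOTCERT_FILE=./crypto-config/peerOrganizations/org1/peers/peer0.org1/tls/ca.crt

# Invoke the chaincode, addressing the orderer by the IP of its VM
# instead of localhost, so the ordering service is reachable.
peer chaincode invoke -o <orderer-VM-IP>:7050 \
  --tls --cafile ./crypto-config/ordererOrganizations/orderer/tlsca/tlsca-cert.pem \
  -C mychannel -n foodchain \
  -c '{"function":"createProduct","Args":["EVOO-001","farm-calabria"]}'
```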
Random documents with unrelated
content Scribd suggests to you:
CAPITULO III.
Situacion de la España—Reinado de Don Fernando y Doña
Isabel—Anarquía—Guerra cívil—Fanatismo—Restablecimiento de
la Inquisicion—Influencia del Clero—Expulsion de los judios y
moros—Odios entre España y Portugal.
Nos es necesario echar una ojeada sobre el País á que se dirigía
Colon y sobre los sucesos de la época en que debía llegar.
El reinado de Enrique IV, llamado el impotente, había sido funesto
para Castilla; él mismo había abierto las puertas de la mas
escandalosa anarquía rebelándose contra su padre. No eran mejores
los ejemplos de su vida privada; había agotado las fuerzas de su
juventud en la mas desenfrenada crápula. Sin mas sucesion que su
hija Juana y aun su legitimidad desconocida al extremo de llamarla
el pueblo y los nobles la Beltraneja, á causa de las intimidades
ostensibles de Don Beltran de la Cueva con la Reyna, fué este
desgraciado vástago en vez de solucion de las cuestiones de
sucesion, causa de trastornos y de guerras.
De ánimo débil, pasó por sucesivas humillaciones que despretijiaron
su autoridad y hacían que tomase colosales proporciones la
anarquía. Hizo primero reconocer á su hermano Don Alfonso como
sucesor al trono, cediendo á las imposiciones de la nobleza y
desconociendo los derechos de su hija.
Muerto Don Alfonso á los quince años de edad, se hizo por las
mismas imposiciones, el pacto llamado de los Toros de Guisando, en
que fué reconocida su hermana Doña Isabel con derecho á la
sucesion del trono pretendiendo salvar su autoridad, con una
claúsula por la cual esta no se casaría sin asentimiento del monarca.
Todos estos resultados venian precedidos de intrigas, asonadas y
crímenes.
Llegó el descontento al extremo de quererse destronar al monarca
para levantar á Doña Isabel, como ya una faccion había proclamado
á Don Alfonso, pero la futura soberana de España tuvo la discrecion
de no prestarse al movimiento.
El matrimonio de la simpática princesa con su primo el infante de
Aragon, Don Fernando, Rey de Sicilia, es un idilio que pocas veces
ocurre en la crónica de los reinos. Don Enrique pretendió que la
princesa se casase primero con el principe de Francia, despues con
Pedro Giron, altivo y rebelde noble que puso esa condicion á su
sometimiento y por último con el Rey de Portugal. La princesa
resistió con energía todas estas imposiciones porque amaba á Don
Fernando de Aragon y solo con él consentiría en un enlace.
Para evitar las persecuciones é intrigas de la Corte hizose venir al
Infante secretamente, corriendo serios peligros y con la proteccion
de los nobles que le eran adictos en Castilla, celebraronse las
nupcias que unian por lo pronto dos ardientes corazones y que mas
tarde debian unir dos reinos, formando uno tan grande que en él
jamas el sol tendria ocaso.
Muerto Don Enrique IV en Diciembre de 1474 fué, en la ciudad de
Segovia, proclamada Reyna de Castilla Doña Isabel, no sin que al
mismo tiempo ambiciosos viniesen á disputarle el trono, so pretesto
de sostener la causa de Doña Juana. La actividad que en esta lucha
demostró la nueva Reyna, probó que ambicionaba ardientemente el
poder y que tenia grandes aptitudes para sobrellevarlo.
Doña Juana habíase esposado con Don Alfonso V Rey de Portugal y
este invadió á Castilla, sostenido por los nobles adictos á ella y
trabóse una guerra de sucesion que probó la impericia militar de
unos y otros. Por último, vencido el Portugues, retiróse á su Corte y
la infeliz Doña Juana, despues de haber sido heredera de un trono,
novia de tantos ambiciosos y desposada de un Rey, concluyó por
buscar la paz del alma en un Monasterio.
Fallecido en Enero de 1479 el Rey de Aragon Don Juan II, fué
elevado al trono Don Fernando y produjose así la unidad Española.
En todo este movimiento vése por único actor á la casualidad. A Don
Enrique sucederle debia su hija Juana y en defecto de ella, su
hermano Don Alonso, jóven sensato, que apesar de su corta edad
tuvo bastante carácter para rechazar mas de una infamia; hubiese
sido un buen Rey y no llegó á ser sinó una esperanza frustrada sin
que falten historiadores que atribuyan al veneno su prematura
muerte. En tal caso Doña Isabel hubiese sido otra monja como Doña
Juana ó hubiese optado por ser Reyna de Portugal, casandose con el
viejo monarca que la pretendia. Entónces la union de los Reinos de
Aragon y Castilla efectuado ipsofacto por su matrimonio con el
Príncipe, no se hubiese realizado, sin que hubiesen tenido lugar
muchos de los sucesos que vamos á referir.
Prescott en la historia de los Reyes Católicos, dá al reinado de Doña
Isabel un orígen electoral, cosa que en verdad no es asi, pues toda
la autoridad de Doña Isabel se derivó del célebre pacto de los Toros
de Guisando, infringido no obstante por la misma agraciada en la
cláusula que exigía la intervencion de Don Enrique en su
matrimonio. Si casualidad fué todo, pocas veces ha dado orígen á
tanto bien y á tanto mal.
A situacion tan espantosa, como la dejada por el reinado que
caducaba, requeriase un gobierno enérgico y justo, que salvase el
principio de autoridad, desconocido por la terrible anarquía que
destrozaba la Península Ibera y los Reyes Católicos, que muchos y
muy grandes errores debian cometer, eran no obstante justos y
enérgicos.
Todos los historiadores están contestes en el tétrico cuadro que
ofrecía la España al morir Don Enrique. La seguridad de las personas
y de las cosas era mayor entre las hordas salvajes que en sus
campos y aun en sus ciudades; los mismos nobles mandaban desde
sus castillos robar y asesinar á los viajeros; el feudalismo estaba en
su apojeo; los tribunales por prevaricaciones escandalosas ó por
miedo no servian sinó para alentar la injusticia y el crímen; la
industria decaida, el comercio abatido; una crísis espantosa á causa
de que cada noble acuñaba la moneda á su antojo, depreciándose
esta al extremo de que las transaciones se hacian, como en los
tiempos primitivos, por trueque ó cambio.
El Clero era un poder, el único poder, la única autoridad, al extremo
de que criminales vestian el hábito sin profesar para escudarse y
quedar impunes. Los maestrazgos de las órdenes religioso-militares,
recibian del Papa su autoridad; no se sometian al Gobierno y
acumulaban grandes riquezas. En fin, si se quiere una imágen del
cáos, busquese en esa época de la historia de España, sobre todo en
Castilla y Andalucia.
Los Reyes Católicos acometieron la tarea de domar esa anarquía y
ya con rigor, ya con blandura; ya confirmando fueros y derechos á
las ciudades, ya despojando á los nobles de sus derechos feudales,
ya reconciliando los magnates enemistados, ya sometiendo á los que
gobernaban por su cuenta incluso al altivo conde de Cádiz, ya
prestigiando los tribunales de justicia, ya reformando los
procedimientos y leyes civiles; en pocos años, la misma admiracion
que nos ha causado el desquicio del gobierno de Don Enrique, nos
asalta al ver las reformas obtenidas por los Reyes Católicos. Apesar
de su energía, Doña Isabel nada hubiese conseguido sin la union del
Reyno de Aragon; habiase allí refugiado lo mas sensato y patriota de
la nacion Española; su constitucion liberal, su riqueza de que era
emporio el puerto de Barcelona, todo eso reflejaba prestigio sobre
ella y era un contrapeso poderoso; los nobles y el pueblo mismo de
Castilla, sabían que en un caso dado, un ejército Aragonés vendría á
apoyar á la Soberana y véase en esto una demostracion de como la
anarquía, hija siempre de la desmembracion social, cesa cuando la
unidad se restablece.
Dos episodios citaremos para demostrar que estos Soberanos si bien
dotados de grandes cualidades, no eran aptos para mejorar la
situacion del Pais.
Los obispados de España se proveian sin anuencia del Soberano, y si
los Reyes Católicos reivindicaron ese derecho, no se descubre en ello
sinó la influencia del Clero Español, interesado en esa reivindicacion
porque era pospuesto por prelados de Roma. Los Reyes estaban
sometidos á esa influencia al extremo de que el confesor de Doña
Isabel, nuevamente nombrado, Fray Fernando de Talavera, cuando
por primera vez fué á ejercer su ministerio, permaneció sentado para
escuchar la confesion:—La costumbre es—dijo Doña Isabel—que
ambos permanezcamos arrodillados.—Nó—exclamó el confesor—yo
soy ministro de Dios y este su tribunal y V. A. debe permanecer de
rodillas y yo sentado. La Reyna se arrodilló.
Doña Isabel tenia, no hay duda grandes condiciones pero no era
superior á su época, estaba muy á su nivel. La España debía
permanecer siempre con los gérmenes de la anarquía, contenidos
pero no extirpados; el fanatismo debia acrecentarse tanto mas
cuanto mas quisiese hacerse de la religion elemento social.
Es asi que el restablecimiento de la Inquisicion hizo á este poder
mas irresistible que en las épocas anteriores. Algunos historiadores
para disculpar á Doña Isabel dicen que fué á requisicion del Papa
que se hizo este restablecimiento; no hay tal, existen aun los
documentos que prueban que fué á peticion de la misma Doña
Isabel que se dió la bula que debia levantar en Torquemada, el
déspota, el tirano mas cruel de los tiempos pasados y futuros.
Estos dos episodios prueban que, ó los Reyes Católicos no eran tales
como los representa la historia, sinó crueles y sanguinarios ó que
estaban tan dominados por el Clero como Don Enrique lo estaba por
los nobles rebeldes. Destruido un feudalismo, levantaban otro cien
veces peor; quitada á los nobles la horca y cuchillo, ponian en
manos de los Inquisidores la tea para encender las hogueras del
martirio.
No faltan historiadores que fascinados por el prestigio de los grandes
acontecimientos que la casualidad hizo producir en el reynado de
Doña Isabel, quieran atenuar esta mancha, echando la culpa á la
época. Nó, la moral y la justicia son eternas y no tenemos otra regla
para juzgar los hechos de cualquier tiempo. No fueron menos graves
otros errores cometidos por los Reyes Católicos; la expulsion de
España de los Judios y de los Moros, las persecuciones inhumanas
contra esos desgraciados, el saqueo de sus propiedades, son hechos
que bastan para borrar la poca gloria que se les atribuye en la
unidad de España y en el descubrimiento de América.
La misma guerra contra los Moros refugiados en Granada, no se
llevaba con tanto celo al principio; fué necesario que algunos nobles
por si y ante si la iniciasen con la toma de Alhama, para decidir al
Monarca á ponerse en campaña y en toda esa guerra cuesta
discernir el fanatismo del amor patrio.
Ni faltaron tampoco los estragos de la guerra cívil en este Reynado,
bastando para comprobarlo que citemos el movimiento separatista
que inició en Galicia el mariscal Pardo de Cela, siendo necesario que
se enviase allí un ejército que sufrió un reves y que no pudo triunfar
sinó á merced de una traicion por la cual, aprisionado el separatista,
fué ahorcado sin piedad.
Tal era la situacion en que Cristóbal Colon debia hallar á la España,
agregando que los antiguos odios entre esa Nacion y Portugal
habian recrudecido con la guerra de sucesion de Doña Juana, á
causa de la invasion á Castilla por el Rey Don Alfonso, en proteccion
de esas pretensiones.
CAPITULO IV.
Los Conventos—Llegada de Colon á el de la Rávila—Opinion de
algunos autores—Colon en la Corte—Exámen de su proyecto—
Su rechazo—Nuevas tentativas—Proyecto de marcha—Carta del
Rey de Francia—Aceptacion de su proyecto en principio—
Inconvenientes en la práctica—Aceptacion definitiva del
proyecto.
En aquellos tiempos de miseria y de barbarie, tropezábase
frecuentemente en España y en Italia con altos muros entre los
cuales se incrustaba iglesia gótica y en el interior de ese recinto
hallabase almacenada la abundancia y refugiada la ilustracion, por lo
general teológica, casuítica, fanática, pero á veces en una celda
apartada, como un punto luminoso, se escondia bajo el hábito del
fraile, un sabio ó un artista, único principio vital del porvenir, única
chispa que algun dia restituyese al mundo los resplandores de la luz.
Allí se absorbia el sudor de los labradores y de los artesanos
distribuyéndose en cambio á los vagamundos, algunos bocados de
sopa, ostentacion de caridad calculada para que se redoblasen las
limosnas.
A la puerta de uno de estos edificios del Monasterio de la Rávila, á
corta distancia del puerto de Palos, un dia canicular en 1484
detúvose un peregrino que conducia un niño de la mano. Ni el polvo
que cubria su pobre ropaje, ni la fatiga retratada en su semblante, ni
el dolor que se reflejaba en sus ojos, disminuian la nobleza de su
porte,—¿Que buscaba ese hombre?—¿Era acaso un mendigo?—No
pedia sinó un poco de sombra para reposar y un mendrugo de pan
para el niño.
Habia en ese Convento una luz y con ella se descubrió lo que
buscaba ese viajero en su afanosa peregrinacion; Fray Juan Perez de
Marchena era uno de esos seres refugiados en el Convento, que
vestia el hábito del fraile pero que conservaba el corazon y la
inteligencia libres del fanatismo. Ver al forastero y adivinar en él
todo un drama interesante, fué la concepcion feliz de un momento;
sin duda pensó que tambien el Dante, algun tiempo hacia, habia
buscado igual refugio en Italia.
El peregrino y el fraile se miraron, se explicaron, se comprendieron.
Ese humilde viajero que hallaba asi hospitalidad y apoyo, era
Cristóbal Colon y el niño, su hijo Diego.
Algunos historiadores modernos han querido desconocer este
poético episodio, pretendiendo que Colon desembarcó en el puerto
de Santa María y que fué hospedado en el Palacio del Duque de
Medina-Celi, refiriéndose á un documento que no citan ni describen.
Tal documento no puede ser otro que el que se refiere á las
relaciones que tuvo con dicho Duque mucho despues de su llegada á
España, como mas adelante lo veremos. Por otra parte no es
verosímil que habiendo salido Colon de Lisboa furtivamente,
despreciado por la Corte, sin influencia ni valimiento alguno,
desembarcase en España con el prestigio necesario para hacerse
abrir las puertas del Palacio del orgulloso Duque y encontrarlo
dispuesto á servirlo.
Todo en el reinado de Doña Isabel debia ser obra de la casualidad;
Cristóbal Colon rechazado por el Monarca de Portugal por importuno,
venia á España como vagabundo y como vagabundo llama á las
puertas del monasterio de la Rávila donde halla un hombre que lo
socorre y lo comprende, se encarga de la educacion del hijo, lo
mune de recomendaciones y lo dirige á la Corte.
Entre las recomendaciones que llevaba Colon habia una para aquel
Fray Fernando de Talavera, confesor de la Reyna, de que hemos
hablado ya y no podia ser mejor dirigido el pretendiente que á un
hombre que hacia arrodillar á sus plantas á Isabel para oir su
confesion y darle sus consejos.
Hallábase la Corte en Córdoba y toda la atencion era absorvida por
los cuidados de la guerra contra los Moros de Granada.
El confesor de la Reyna apenas respondió con seca urbanidad á la
recomendacion que se le hacia del marino; ignorante y tan fanático
como de cortos alcances, no le sirvió como pudo haberle servido.
Pero Colon estaba ya en camino y supo captarse la amistad de otras
personas influyentes, entre ellas á Gheraldoni nuncio del Papa, y á
su hermano Alejandro, preceptor de los hijos de los Monarcas y por
intermedio de estos obtuvo una audiencia del Cardenal Mendoza que
tanto valimiento tenia en la Corte que era llamado la tercer potencia.
Mendoza debia ser hombre instruido, al menos de elevado espíritu,
pues escuchó á Colon con atencion, lo exortó á perseverar en sus
planes y obtuvo éste por su intermedio una audiencia de los Reyes.
Colon era elocuente; conocia que para convencer y persuadir es
menester hacer vibrar las fibras mas sensibles del corazon de su
auditorio y halagar sus creencias y aun sus preocupaciones. Así
pues, á los soberanos de Castilla les habló de la gloria de extender
sus dominios; excitóles la avaricia con el acrecentamiento de un
comercio riquísimo; pero en lo que insistió mas y con acento
profético, fué en el triunfo de la fé cristiana, en la conversion de
millares de idólatras y aun en el rescate del Santo Sepulcro. Es
probable que Colon creyese en mucho de lo que decia, pero no hay
duda que exageraba su fé y su ortodoxismo para persuadir. Su larga
permanencia en Portugal le habia hecho adquirir una pronunciacion
y un acento mas semejante al castellano y su trato con españoles,
aun ántes de llegar á España, le permitia expresarse en ese idioma
con bastante claridad y elegancia. La impresion causada en el ánimo
de los Reyes fué favorable, sobre todo en Doña Isabel que era mas
ambiciosa y mas accesible al entusiasmo.
Pero el proyecto de Colon rozaba con puntos de la fé y dado el
fanatismo de los Reyes, no podia ser aceptado sin someterlo al
exámen de peritos.—Pero—¿Que peritos podrian ser en esta materia
teólogos y frailes? Compuesto este tribunal de esta manera y
presidido por el confesor de la Reyna fácil es comprender que el
proyecto de Colon era de antemano condenado.
Admitido á exponer y defender su idea ante el areópago ortodóxo
presentósele otra ocasion de lucir su elocuencia. Esta vez expuso
todas las teorías de Tolomeo y Toscanelli, para demostrar la
practicabilidad del viaje y no poco le sirvió su erudicion en la Biblia
para ayudarse á conciliar sus errores con los nuevos errores que
profesaba. Había esta diferencia grandísima entre unos y otros
errores; que los teológicos cerraban la puerta á todo
descubrimiento; inmovilizaban, aletargaban, envenenaban la vida
como las emanaciones de un lago sin corriente, miéntras que los
errores de la ciencia impulsaban al progreso, admitian nuevas
hipótesis, se encadenaban con las verdades del porvenir. Era una
lucha titánica y sosteniéndola Colon era ya tan grande y tan digno
de la posteridad, como si hubiese realizado ya su descubrimiento.
Pasaban los meses y los años y el Consejo no expedía su dictámen.
Entre tanto Colon abria su alma á dulces sentimientos y consuelos.
Había trabado relacion con una noble y hermosa dama llamada
Beatriz como aquella que inspiró al Dante y fruto de estos amores
fué Don Fernando, que mas tarde hizose estimar por sus méritos y
fué el primer historiador de las hazañas de su padre. Al fin en 1491,
redoblando Colon sus instancias, obtuvo que el Consejo se
expidiese, pero éste fallo le fué completamente adverso.
Al recibir esta noticia, experimentó tanta amargura que, á no ser los
vínculos que lo unian ya á España, la hubiera abandonado como
abandonó á Lisboa.
Tentativas infructuosas con algunos grandes personajes, entre ellos
el Duque de Medina-Celi, lo detuvieron todavía, pero al recibir una
carta del Rey de Francia que lo llamaba, resolvió partirse. Como
recordará el lector, su hermano Bartolomé gestionaba en Inglaterra
la admision de sus proyectos y regresando con éxito ó sin éxito,
había instruido de ellos tambien al Monarca Francés que los aceptó
con entusiasmo.
Partióse pues Colon desandando aquel camino de Córdoba á la
Rávila que había ántes emprendido tan lleno de esperanzas.
Aquellos para quienes la vida no ha sido una contínua lucha, que no
saben lo que es una esperanza salvadora que se desvanece, que no
han contado con un recurso único que se pierde, aquellos que no
han ido á la ilusion y vuelto al descanto por el mismo trayecto, no
podrán hacerse una idea de los tristes pensamientos que asaltarían
la mente de Colon.
Por segunda vez llamó á las puertas del convento de la Rávila y por
segunda vez Fray Juan Perez reanimó las esperanzas del marino.
Consiguió que detuviese su viaje á Francia, envió á pedir una
audiencia á la Reyna, de quien habia sido confesor, y una vez
obtenida, marchóse á la Corte sin detenerse y aun sin esperar el dia
para ponerse en marcha.
Como en todos estos sucesos había algo de providencial, la carta del
Monarca Francés, vino oportunamente y fué sin duda el gran
argumento que empleó el de la Rávila para convencer á la Reyna.
El Portugal era odiado por los Reyes y Pueblo Español, pero la
Francia era mirada con recelo y emulacion, sin duda desde las
guerras de Aragon y de Italia en que Franceses y Españoles se
disputaban el mas rico giron de aquellos paises. Así fué que pensar
en que la Francia acogería á Colon y podría gozar la gloria de su
empresa, despertó los celos de Doña Isabel. Se ordenó que Colon
regresase dándosele seguridad de que sería atendido y
adelantándosele veinte mil maravedies para sus gastos.
Llegó esta vez á la Corte nuestro héroe lujosamente vestido y con
aire de triunfo y hallándose los Reyes entónces frente á los muros de
Granada, allí se dirigió, llegando en el oportuno momento de ser
tomada la ciudad y estarse celebrando alegremente la victoria
decisiva contra los Sarracenos.
Allí tuvo la satisfaccion de ver al fin de tantas peripecias aceptado, al
menos en principio, la proposicion de su descubrimiento.
Delegó la Reyna en varias personas el encargo de tratar las bases y
formalizar el compromiso y otra vez Fray Fernando Talavera debia
presidir el Consejo. Había éste ascendido á arzobispo de la recien
reconquistada Granada, redoblado su influencia pero tambien su
terquedad y su fanatismo. Entre Talavera y Colon existia una
antipatia bien manifiesta y cuando oyó aquél que éste exigia ser
nombrado Almirante y Virrey de las tierras que descubriese, asi
como la décima parte de los productos, no pudo contenerse y
exclamó: que no era mal arreglo el asegurar dignidades y riquezas
sin exponerse á pérdidas. A esto contestó Colon que se comprometia
á cargar con la octava parte del costo de la expedicion, obteniendo
la octava parte de los beneficios.
La Reyna que en este negocio era siempre de la opinion de su
confesor, no se opuso al dictámen otra vez adverso á Colon, y este,
ya en el año de 1492, partióse de la nueva ciudad de Santa-Fé para
dirigirse á Francia como ya lo habia ántes pensado.
Tenía proposiciones ventajosas del Rey de Francia y por esta razon
no cedia de sus pretensiones; esto estaba previsto por él, como lo
hemos dicho ántes, esto es: si sus ofertas eran acogidas por dos
soberanos, aceptaría la mejor proposicion. No hay duda que prefería
servir á la España porqué en ella tenía ya vínculos y afecciones, pero
no eran tan poderosas que le impidiesen ir á buscar mejores
condiciones.
En cuanto á la Reyna había confiado á su Consejo la negociacion y
sus consejeros le hacían creer que Colon cedería al fin y aceptaría ir
al descubrimiento sin pedir honores y cuotas de ganancias. Pero
viendo la Reyna que se marchaba en verdad, envió á detenerlo por
segunda vez porque no quería de manera alguna, que fuese la
Francia la que tuviese la gloria de una empresa que aunque no la
reputase tan colosal como resultó, creia sin embargo fuese de gran
importancia. Así pues todo lo relativo á nobles trasportes de parte de
Isabel y á la resolucion de vender sus alhajas si faltasen fondos para
la expedicion, no es sinó fábula inventada para engrandecer á la
Reyna, y hacer mas decoroso este período de la historia.
Los fondos de la expedicion se sacaron del tesoro público de Aragon
y del particular de Don Fernando.
Aceptado en definitiva lo que exigia Colon, firmóse el convenio en la
ciudad de Santa-Fé, en la Vega de Granada en 17 de Abril de 1492.
Si no fué la Francia la iniciadora del descubrimiento de América es
debido á dos nobles sentimientos que detuvieron á Colon, el amor á
Doña Beatriz y la amistad de Fray Juan Perez de Marchena, sin lo
cual no hubiera regresado á Córdoba á reanudar sus negociaciones.
Sin que desconozcamos la grandeza del Pueblo Español, no hay
duda que la Francia pudo llevar en el descubrimiento y poblacion de
la América, elementos sociales mas constitutivos que los que llevó
aquel Pueblo que se hallaba en esa época, en condiciones nada
aparentes para la colonizacion y en el cual era constitucional la
anarquía y arraigado estaba el fanatismo. Tampoco hubiéranse
reproducido en las nuevas colonias de la América del Sur el odio
entre Portugueses y Castellanos y las cuestiones de límites y de
predominio, hubiéranse resuelto con otro espíritu, y otras
consideraciones.
CHAPTER V.
Preparations for departure—At how little cost Spain acquired a world!—Departure of the expedition—Course—Discovery—Astonishing errors—Deviation of the compass—Columbus's true discovery.
The port of Palos was designated for fitting out and dispatching the expedition that was to launch itself upon the Ocean to realize Columbus's dreams.
Every measure was taken to facilitate the departure, and, advantage being taken of the obligation of the inhabitants of that port to furnish vessels and seamen to the State as tribute, the seizure of two vessels and their crews was ordered. The Crown's expenses, then, must have been quite insignificant, amounting only to the purchase of provisions and four months' pay advanced to the crews. At so little cost was Spain about to acquire a New World!
The fitting out of the third vessel was at Columbus's charge, and, as nearly all historians assert, though we do not know from what source they have drawn it, Martin Alonso Pinzon, a rich shipowner of that same port of Palos, furnished the funds needed for the purpose, he and his brother resolving to accompany Columbus on the voyage and taking command of the vessels that were to follow the Admiral, the title by which Columbus was known from that time forward. Of the three vessels fitted out, only the one he himself sailed, the Santa Maria, was decked; the other two, the Pinta, commanded by Martin Alonso Pinzon, and the Niña, by Vicente Yanez Pinzon, were caravels, and the whole company of the squadron amounted to no more than one hundred and twenty men, recruited, it must be said, with untold difficulty.
On Friday, August 3, 1492, before sunrise, the vessels set sail, bound to steer whatever course Columbus should indicate, on condition of not touching at the Azores, the Cape Verde islands, the coast of Guinea, or any other Portuguese colony.
From the first day of the voyage the Admiral kept a diary to record its events, so that this part of the history rests on a sure source. In the introduction to that diary we find it notable that he styled the Catholic Monarchs Kings of Spain and of the islands of the Sea. Of what islands did he mean to speak? Antillia, according to the belief of the age, was inhabited; Cipango and the other imagined islands were dependencies of India, and it was to be supposed that the Great Khan, a powerful emperor, would hardly be disposed to cede his dominions to a handful of adventurers.
Perhaps Columbus divined the existence of some uninhabited lands, or supposed them merely to excite the greed of the monarchs; but if one recalls the insistence with which he demanded to be named Governor of those lands, the first of these hypotheses must be admitted. Yet he attached little importance to those lands, for he said that the chief object of his voyage was to carry an embassy to that powerful monarch of India and to treat of the conversion of the infidels. In corroboration of this we shall see how, on reaching the end of his voyage, he sought that Monarch more than the unknown lands.
Leaving these doubts aside, let us follow the narrative of his voyage.
The squadron having reached the Canaries, the damage to one of the vessels repaired, the defects in the rigging of another corrected, and abundant provision taken on, it sailed from Gomera on September 6, steering South and not West, as some say.
Let us set aside the minutiae of this voyage and fix our attention on its course and stops, to convince ourselves that Columbus's conduct, dispositions, and conceptions conformed to the map transmitted to him by Toscanelli and to the system of longitudes which that great man had, on the faith of Marco Polo, monstrously distorted. From Gomera Columbus sailed almost due South and, nearing the Tropic of Cancer, turned abruptly to the West, that is, toward the quarter to which no one had ever sailed, and held the same course until the sign of land nearby induced him to change it.
By this Columbus sought to reach the parallel that Toscanelli had assigned him. There he believed he would find, after some two months of sailing, as Toscanelli told him in the second of his letters, either the unknown land of Ptolemy or some of those places in the region of India where he might take refuge in any unforeseen mishap; and in truth it turned out that after thirty-seven days of sailing only fifty-five degrees remained to him to complete the one hundred and twenty degrees laid down on that map. The provision of victuals he took, according to Gonzalo de Oviedo, was sufficient only for that time.
The name of India that Columbus gave to America, and the claim that its islands belonged to the Indian sea, were the consequence of Toscanelli's promise to lead him straight to Asia, to the places most fertile in every kind of spice and in precious stones; inasmuch as whoever sailed to the West would always find those places to the West. So too the name Cubanacan, uttered by the inhabitants of Cuba, made him believe he was in the dominions of the Great Khan, and the word Cibao, repeated by those of Hispaniola, likewise made him believe he had reached Cipango.
Columbus had given orders always to hold the course to the West and to sail as far as seven hundred leagues, halting at that distance, for at that point land was to be found. From Europe to Antillia, as we have said, Toscanelli's calculation gave two thousand four hundred and seventy-five miles, which make somewhat less than the seven hundred leagues mentioned; hence the land Columbus expected to find in that vicinity was the Antillia of Toscanelli.
On Friday, October 12, 1492, the American land was descried by the crews of the squadron. That land was the island called Guanahami by the natives and San Salvador by Columbus.
Here Toscanelli's error, Columbus's temerity, and the peril in which his fleet stood appear before us in all their magnitude.
But for the several islands of America that put an end to his voyage precisely at the distance at which India had been promised him, his loss would have been certain. On the parallel he sailed he would have seen no land until near China, and China, placed by Toscanelli at one hundred and twenty degrees from Lisbon, in truth lay two hundred and thirty degrees away. Thus, even supposing that wind and sea had favored him over so long a passage, where could he have provisioned himself, and how subsisted for more than two months in absolute want of victuals? When one considers that Columbus was mistaken by one hundred and ten degrees, one is astonished at so great a risk, and that errors so enormous should have been crowned with the happiest success.
In vain has it been said in Toscanelli's excuse that he suspected the existence of an intermediate continent, or at least of a vast island between Europe and Asia.
Of such a suspicion no trace is to be found in his letters, and his single, absolute longitude of one hundred and twenty degrees moreover excludes that hypothesis. What certainly led him astray was the apparent symmetry of his new system; thus it is understandable that, having on Polo's testimony added nearly one hundred and ten degrees of longitude to the known part of the earth, he was necessarily driven to subtract that same longitude from the unknown part of the Ocean.
In this voyage Columbus had been most fortunate; the trade winds bore his vessels over a calm sea with delightful speed. But a phenomenon unknown until then was to present itself and leave the Admiral perplexed. As the deviation of the compass was not yet known, and no North was believed in but the North of the World, no one suspecting the magnetic attraction that must make itself felt on leaving the northern parallels, the phenomenon could not but be alarming and unexpected.
The pilots of the expedition came to the Admiral in alarm, that he might explain to them the cause of what they observed. He was as ignorant on the point as they, but, so as not to dishearten them, he gave them a sophistical explanation, as Galileo did the first time he was consulted about the pressure of the atmosphere upon a column of water.
Columbus's merit does not lie in having discovered America, for neither he nor his contemporaries ever conceived of the existence of a new continent.
The unknown lands were supposed to be appendages of the Asiatic continent, and nothing new was thought to be discovered. Even with his feet upon American soil, he strove to reconcile it with the reports of Marco Polo.
Columbus's merit lies in having placed himself dauntlessly at the service of science as it then stood, in having accepted from the learned a scientific theory, and in having set out to put it into practice without flinching before the necessity of crossing unknown seas and of standing farther from land than anyone had ever stood. More than America, Columbus discovered the Ocean; he revealed the mystery of its pathways, and the thousand voyagers who followed in his wake and discovered more land than he have not so great a merit, for it was he who opened the horizons believed to be impenetrable.
CHAPTER VI.
Wandering through the archipelago of the Antilles—Loss of the flagship—Desertion of the Pinta—Return voyage—Call at Portugal—Pinzon's treachery—Coincidences favorable to Spain—Celebrated doctrines concerning the lands of infidels—Bull of demarcation—Triumphs of Portuguese diplomacy.
Columbus found himself amid the newly discovered Archipelago, filled with admiration at the sight of so luxuriant a nature. The forests, the meadows, the rivers, the lakes, the infinite variety of the birds, the slopes of the mountains, the gentle undulation of the plains, all shone under the rays of a resplendent sun, and the vegetation breathed the most intoxicating perfume.
But at the same time he was irresolute; he would land upon one island and return to the ships to visit another, all the while naming them Isabella, Española, Concepcion, and so forth, which proves that, though he did not abandon his belief that he was in the vicinity of Asia, he recognized that those islands were not the ones marked on Toscanelli's map, nor on the one he himself had drawn as a guide for his voyage.
While Columbus thus roamed the broad waters of the Antilles, two mishaps befell him. One was the desertion of the Pinta, her commander Pinzon wishing to push on the discoveries on his own account and to gather the coveted riches. The other, and the more irreparable, was the loss of the Santa María, carried by a current and driven hard aground upon a bank. The efforts made to save her were in vain, and the squadron was left deprived of its best vessel.
Columbus's spirit did not sink for all this, and he employed the time in gathering information from those peaceful and noble inhabitants of the islands, for whom the age of slavery and of martyrdom had now arrived. All agreed in pointing to the South, to the existence of a vast and powerful Empire whose Sovereign was obeyed by millions of subjects and possessed immense riches.
These Indians doubtless alluded to the Mexican Empire, but Columbus understood that such a Sovereign must be the Great Khan, and the Empire, the Orient.
Yet he found himself in poor condition to pursue the discovery, reduced to a single caravel and surrounded by rebellious and ill-disposed men.
He therefore resolved to return to Spain, leaving on Hispaniola, the island of the most friendly of the Indian caciques, named Guacanajari, a fort built from the remains of the Santa María and a garrison of thirty men.
The fort was built near the cove which, like the fort itself, he named La Navidad; it was the first essay in colonization, which was to bear such unhappy fruit, giving proof from that moment that the people which discovered and peopled America was the one worst fitted to do so.
On the fourth of January of the year following the discovery, that is, of 1493, Columbus set sail without waiting for the Pinta, which he now believed lost; a strong wind drove him toward the promontory and cove he named Monte-Cristi. Shortly after gaining this refuge he sighted the Pinta, which came seeking the same harbor.
Pinzon defended his insubordination with puerile excuses, and in accepting them Columbus committed the first weakness, which was to prove so fatal to him and to the colonies. He thought that to punish the rebel would be to provoke his partisans and perhaps render his return to Spain impossible; but in this way his authority was broken, and the anarchic elements on which he counted for his future expeditions were disposed toward mischief. Thus, despite the arrival of the Pinta, Columbus persisted in his design of returning to Spain.
On January 9 the vessels set sail, leaving their refuge and steering East. This return voyage was as stormy as the outward one had been calm. Columbus believed that he would perish and that the news of his discovery would perish with him; in anticipation of so sad an event he wrote a brief account of his voyage and, with due precautions, placed it in a cask which he abandoned to the waves, while another copy he caused to be placed in the sterncastle of his ship.
The Pinta had parted company and was once more believed lost, no longer through the insubordination of her commander, but through the fury of the tempest.
At last, on February 15, land was sighted. It was the island of Santa María, the southernmost of the Azores; but because of the storm the Niña could not come to anchor until the 17th.
The Portuguese received Columbus and his men ill, even to the point of attempting to seize the ship and imprison them. Some attribute this hostility to the King of Portugal, who, in the belief that the expedition of the Castilians encroached upon his discoveries, had ordered the governors of his possessions to endeavor to seize the voyagers; but the conduct which the same Monarch afterwards observed belies this supposition.
On February 24 Columbus pursued his way and, not without fresh storms and perils, succeeded on March 3 in anchoring at Rastello, near the mouth of the Tagus. From this point he wrote to the Monarchs of Spain announcing his arrival, and asked leave of the King of Portugal to proceed to Lisbon, the place where he lay being no safe anchorage.
Don Juan II was then with his Court at Valparaiso, nine leagues from Lisbon, and although Columbus's discoveries must have caused him deep chagrin at not having profited by them himself, he bore himself with dignity and ordered that Columbus be furnished with all that he might need.
The Portuguese chronicler Rui de Pina relates that counselors were not wanting who urged the King to order Columbus's death in order to possess himself of his secret; the chronicler's testimony does not suffice to make it credible, but, however that may be, the King treated the Admiral with distinguished consideration. One suspicion lodged in the Monarch's mind, namely whether the discovery affected his African possessions; but Columbus explained clearly that the lands he had visited lay beyond all that was known up to that date, and in a direction different from that of the discoveries of the Portuguese.
After this the Admiral departed for Spain, reaching the port of Palos on March 15 at midday. Hardly had the Niña anchored when the Pinta appeared; she had been driven upon the coast of Cantabria, and from there Pinzon had written to the Monarchs that Columbus had been shipwrecked and that the discovery was owed to himself. He had thus committed two unjustifiable acts of treachery: his rebellion in the archipelago of the Antilles and his imposture upon reaching Spain; faults which, if they cannot be excused, are attenuated by his having aided Columbus at the outset of the enterprise, and by a repentance such that he died of grief.
The Catholic Monarchs were then in Barcelona, where Don Fernando had escaped an attempt upon his life; the treaty of peace with France had just been signed, by which France ceded the counties of Roussillon and Cerdagne. This diplomatic triumph coincided with the definitive conquest of the Canaries, begun by Bethencourt and now concluded by Alfonso Fernandez de Lugo.
Lastly, the Marquis of Cádiz had died, and as he left no heir, the City and Port remained definitively annexed to the Crown. To crown so many happy coincidences came Columbus with the news of his discovery, whose greatness was not yet even suspected.
From the port of Palos to Barcelona there is a journey of some length, passing through towns and cities; that journey was a triumphal march for Columbus, who rode on horseback, preceded by samples of the products, of the animals, and of the Indians brought from the discovered lands. Every town came out to acclaim the fortunate voyager who not long before had presented himself as a beggar. Columbus, then at the height of his glory, little knew how many bitternesses he had yet to suffer, and how the brilliance of his star was to be eclipsed.
The Monarchs received Columbus affectionately, heard from his lips the account of his voyages with interest and even with enthusiasm, and added other favors to his privileges, among them that he might bear a coat of arms with the emblem of the discovery and the following inscription:
For Castile and Leon
Columbus found a New World.
At the same time the Monarchs thought to secure for their dominion the newly discovered countries. The Law of Nations in those days was neither far advanced nor always consulted. Among the means of acquisition the Roman traditions offered nothing but conquest; but the advent of the Popes and their spiritual jurisdiction over all Catholic kings brought in another and most singular doctrine: according to it, infidels had no right to hold dominions, and any Christian prince might strip a heretic prince of his lands and take his place. The ownership of the world belonged to the Catholics, who might reclaim any land of infidels, and by virtue thereof the Pope could distribute the lands as arbiter. Thus it was that Martin V and his successors granted to the Crown of Portugal all the lands that its subjects might discover from Cape Bojador to the Indies, and the Catholic Monarchs, by a treaty concluded with the Portuguese Monarch in 1479, had bound themselves to respect those rights.
The throne of Saint Peter was then occupied by the dissolute Borgia under the name of Alexander VI. It was easy to convince him that the discoveries of the Castilians lay upon a course other than those which had been secured to the Portuguese, and at last, in March of 1493, he issued a bull granting the Crown of Spain, for its discoveries, the same securities that had been granted to Portugal.
To this bull was added the celebrated demarcation, by which there were adjudged to Spain omnes insulas et terras firmas, inventas et inveniendas, detectas et detegendas versus occidentem et meridiem. The demarcation was made by an imaginary line running from the Arctic pole down to the Antarctic, one hundred leagues to the West of the Azores and of the islands of Cape Verde.
Meanwhile a second expedition to the discovered lands was being prepared; but the Portuguese, notwithstanding the celebrated demarcation, remained suspicious of it. There then began a contest of cunning and intrigue, in which bribery and the vilest means were employed to uncover the secrets of this business. The corrupting Portuguese diplomacy, which was to have worthy successors in America, emerged triumphant on this occasion with the celebrated treaty of Tordesillas, concluded on June 7, 1494, by which the dividing line was modified, to be drawn three hundred leagues to the West.
This concession, which was to prove fatal to Spanish America, admits of no explanation. By virtue of this modification the Portuguese later alleged rights to occupy Brazil and to make themselves masters of one of the most important regions of America.

More Related Content

PDF
Advances In Grid Computing Zoran Constantinescu
PDF
Complexity in Future Networks (A. Manzalini)
PDF
Distributed Computing And Artificial Intelligence 19th International Conferen...
PDF
Qos In Wireless Sensoractuator Networks And Systems Mario Alves
PDF
Advances In Information Communication And Cybersecurity Proceedings Of Ici2c2...
PDF
Privacy Preserving in Edge Computing Wireless Networks Longxiang Gao Tom H Lu...
PDF
Cutting Edge Research In New Technologies Edited By Constantin Volosencu
PDF
Design Automation For Fieldcoupled Nanotechnologies Marcel Walter
Advances In Grid Computing Zoran Constantinescu
Complexity in Future Networks (A. Manzalini)
Distributed Computing And Artificial Intelligence 19th International Conferen...
Qos In Wireless Sensoractuator Networks And Systems Mario Alves
Advances In Information Communication And Cybersecurity Proceedings Of Ici2c2...
Privacy Preserving in Edge Computing Wireless Networks Longxiang Gao Tom H Lu...
Cutting Edge Research In New Technologies Edited By Constantin Volosencu
Design Automation For Fieldcoupled Nanotechnologies Marcel Walter

Similar to Device-Edge-Cloud Continuum: Paradigms, Architectures and Applications 1st Edition Claudio Savaglio (20)

PDF
Connectivity Frameworks for Smart Devices The Internet of Things from a Distr...
PDF
Internet Of Things Novel Advances And Envisioned Applications D P Acharjya
PDF
Connectivity Frameworks for Smart Devices The Internet of Things from a Distr...
PPTX
Computer generation presentation by zohaib akram
PDF
Blockchain and Applications 2nd International Congress Javier Prieto
PDF
Serviceoriented And Cloud Computing 9th Ifip Wg 612 European Conference Esocc...
PPTX
IOTCYBER
PDF
Game Theory Solutions For The Internet Of Things Sungwook Kim
PDF
Intelligent Systems In Digital Transformation Theory And Applications Cengiz ...
PDF
Supply Network Dynamics And Control Alexandre Dolgui Dmitry Ivanov
PDF
Information Centric Networks ICN Architecture Current Trends Dutta Nitul
PDF
Advances in Body Area Networks I Post Conference Proceedings of BodyNets 2017...
PDF
Information Centric Networks ICN Architecture Current Trends Dutta Nitul
PDF
Computer Vision Systems 13th International Conference Icvs 2021 Virtual Event...
PDF
Advances In Networkedbased Information Systems The 23rd International Confere...
PDF
Mary Barnsdale article about Fog Computing for Cisco
PDF
Interoperability Of Heterogeneous Iot Platforms A Layered Approach 1st Editio...
PDF
Quantum Machine Learning Quantum Algorithms And Neural Networks Pethuru Raj
PDF
Advances In Engineering And Information Science Toward Smart City And Beyond ...
PDF
Internet Of Things For Smart Environments Gonalo Marques Alfonso Gonzlezbriones
Connectivity Frameworks for Smart Devices The Internet of Things from a Distr...
Internet Of Things Novel Advances And Envisioned Applications D P Acharjya
Connectivity Frameworks for Smart Devices The Internet of Things from a Distr...
Computer generation presentation by zohaib akram
Blockchain and Applications 2nd International Congress Javier Prieto
Serviceoriented And Cloud Computing 9th Ifip Wg 612 European Conference Esocc...
IOTCYBER
Game Theory Solutions For The Internet Of Things Sungwook Kim
Intelligent Systems In Digital Transformation Theory And Applications Cengiz ...
Supply Network Dynamics And Control Alexandre Dolgui Dmitry Ivanov
Information Centric Networks ICN Architecture Current Trends Dutta Nitul
Advances in Body Area Networks I Post Conference Proceedings of BodyNets 2017...
Information Centric Networks ICN Architecture Current Trends Dutta Nitul
Computer Vision Systems 13th International Conference Icvs 2021 Virtual Event...
Advances In Networkedbased Information Systems The 23rd International Confere...
Mary Barnsdale article about Fog Computing for Cisco
Interoperability Of Heterogeneous Iot Platforms A Layered Approach 1st Editio...
Quantum Machine Learning Quantum Algorithms And Neural Networks Pethuru Raj
Advances In Engineering And Information Science Toward Smart City And Beyond ...
Internet Of Things For Smart Environments Gonalo Marques Alfonso Gonzlezbriones
Ad

Recently uploaded (20)

PDF
Journal of Dental Science - UDMY (2021).pdf
DOCX
Cambridge-Practice-Tests-for-IELTS-12.docx
PDF
BP 505 T. PHARMACEUTICAL JURISPRUDENCE (UNIT 2).pdf
PDF
Skin Care and Cosmetic Ingredients Dictionary ( PDFDrive ).pdf
PDF
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
PPTX
ELIAS-SEZIURE AND EPilepsy semmioan session.pptx
PDF
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
PDF
Literature_Review_methods_ BRACU_MKT426 course material
PDF
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 2).pdf
PDF
CISA (Certified Information Systems Auditor) Domain-Wise Summary.pdf
PDF
International_Financial_Reporting_Standa.pdf
PDF
BP 505 T. PHARMACEUTICAL JURISPRUDENCE (UNIT 1).pdf
PDF
Journal of Dental Science - UDMY (2022).pdf
PDF
David L Page_DCI Research Study Journey_how Methodology can inform one's prac...
PDF
Hazard Identification & Risk Assessment .pdf
PDF
Empowerment Technology for Senior High School Guide
PDF
English Textual Question & Ans (12th Class).pdf
PDF
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
PPTX
Share_Module_2_Power_conflict_and_negotiation.pptx
PPTX
Module on health assessment of CHN. pptx
Journal of Dental Science - UDMY (2021).pdf
Cambridge-Practice-Tests-for-IELTS-12.docx
BP 505 T. PHARMACEUTICAL JURISPRUDENCE (UNIT 2).pdf
Skin Care and Cosmetic Ingredients Dictionary ( PDFDrive ).pdf
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
ELIAS-SEZIURE AND EPilepsy semmioan session.pptx
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
Literature_Review_methods_ BRACU_MKT426 course material
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 2).pdf
CISA (Certified Information Systems Auditor) Domain-Wise Summary.pdf
International_Financial_Reporting_Standa.pdf
BP 505 T. PHARMACEUTICAL JURISPRUDENCE (UNIT 1).pdf
Journal of Dental Science - UDMY (2022).pdf
David L Page_DCI Research Study Journey_how Methodology can inform one's prac...
Hazard Identification & Risk Assessment .pdf
Empowerment Technology for Senior High School Guide
English Textual Question & Ans (12th Class).pdf
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
Share_Module_2_Power_conflict_and_negotiation.pptx
Module on health assessment of CHN. pptx
Ad

Device-Edge-Cloud Continuum: Paradigms, Architectures and Applications 1st Edition Claudio Savaglio

  • 1. Device-Edge-Cloud Continuum: Paradigms, Architectures and Applications 1st Edition Claudio Savaglio install download https://ebookmeta.com/product/device-edge-cloud-continuum- paradigms-architectures-and-applications-1st-edition-claudio- savaglio/ Download more ebook from https://ebookmeta.com
  • 2. We believe these products will be a great fit for you. Click the link to download now, or visit ebookmeta.com to discover even more! Applications of Tensor Analysis in Continuum Mechanics 1st Edition Victor A Eremeyev Michael J Cloud And Leonid P Lebedev https://ebookmeta.com/product/applications-of-tensor-analysis-in- continuum-mechanics-1st-edition-victor-a-eremeyev-michael-j- cloud-and-leonid-p-lebedev/ Modern Semiconductor Physics and Device Applications 1st Edition Vitalii Dugaev https://ebookmeta.com/product/modern-semiconductor-physics-and- device-applications-1st-edition-vitalii-dugaev/ Network Management in Cloud and Edge Computing Yuchao Zhang https://ebookmeta.com/product/network-management-in-cloud-and- edge-computing-yuchao-zhang/ The Future of the Artificial Mind 1st Edition Alessio Plebe https://ebookmeta.com/product/the-future-of-the-artificial- mind-1st-edition-alessio-plebe/
  • 3. On Board Processing for Satellite Remote Sensing Images 1st Edition Guoqing Zhou https://ebookmeta.com/product/on-board-processing-for-satellite- remote-sensing-images-1st-edition-guoqing-zhou/ A Crab in the Cab Marv Alinas https://ebookmeta.com/product/a-crab-in-the-cab-marv-alinas/ How to Start Your Own Cybersecurity Consulting Business: First-Hand Lessons from a Burned-Out Ex-CISO 1st Edition Ravi Das https://ebookmeta.com/product/how-to-start-your-own- cybersecurity-consulting-business-first-hand-lessons-from-a- burned-out-ex-ciso-1st-edition-ravi-das/ Top STEM Careers in Engineering 1st Edition Gina Hagler https://ebookmeta.com/product/top-stem-careers-in- engineering-1st-edition-gina-hagler/ Screening the Paris Suburbs From the Silent Era to The 1990s 1st Edition Philippe Met https://ebookmeta.com/product/screening-the-paris-suburbs-from- the-silent-era-to-the-1990s-1st-edition-philippe-met/
  • 4. Theodor W Adorno s Philosophy Society and Aesthetics 1st Edition Stefano Petrucciani https://ebookmeta.com/product/theodor-w-adorno-s-philosophy- society-and-aesthetics-1st-edition-stefano-petrucciani/
  • 5. Internet ofThings Claudio Savaglio Giancarlo Fortino MengChu Zhou Jianhua Ma Editors Device-Edge- Cloud Continuum Paradigms, Architectures and Applications
  • 6. Internet of Things Technology, Communications and Computing Series Editors Giancarlo Fortino, Rende (CS), Italy Antonio Liotta, Edinburgh Napier University, School of Computing, Edinburgh, UK
  • 7. The series Internet of Things - Technologies, Communications and Computing publishes new developments and advances in the various areas of the different facets of the Internet of Things. The intent is to cover technology (smart devices, wireless sensors, systems), communications (networks and protocols) and computing (the- ory, middleware and applications) of the Internet of Things, as embedded in the fields of engineering, computer science, life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in the Internet of Things research and development area, spanning the areas of wireless sensor networks, autonomic networking, network protocol, agent-based computing, artificial intelligence, self organizing systems, multi-sensor data fusion, smart objects, and hybrid intelligent systems. Indexing: Internet of Things is covered by Scopus and Ei-Compendex **
  • 8. Claudio Savaglio • Giancarlo Fortino • MengChu Zhou • Jianhua Ma Editors Device-Edge-Cloud Continuum Paradigms, Architectures and Applications
  • 9. Editors Claudio Savaglio DIMES Università della Calabria Rende, Cosenza, Italy MengChu Zhou New Jersey Institute of Technology Newark, NJ, USA Giancarlo Fortino DIMES Universita della Calabria Rende, Cosenza, Italy Jianhua Ma Hosei University Tokyo, Japan ISSN 2199-1073 ISSN 2199-1081 (electronic) Internet of Things ISBN 978-3-031-42193-8 ISBN 978-3-031-42194-5 (eBook) https://doi.org/10.1007/978-3-031-42194-5 © The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2024 This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors, and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. 
This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Paper in this product is recyclable.
Contents

Towards the Edge-Cloud Continuum Through the Serverless Workflows ... 1
Christian Sicari, Alessio Catalfamo, Lorenzo Carnevale, Antonino Galletta, Antonio Celesti, Maria Fazio, and Massimo Villari

Firmware Dynamic Analysis Through Rewriting ... 19
Claudia Greco, Michele Ianni, Antonella Guzzo, and Giancarlo Fortino

Performance Analysis of a Blockchain for a Traceability System Based on the IoT Sensor Units Along the Agri-Food Supply Chain ... 35
Maria Teresa Gaudio, Sudip Chakraborty, and Stefano Curcio

The Role of Federated Learning in Processing Cancer Patients' Data ... 49
Mihailo Ilić, Mirjana Ivanović, Dušan Jakovetić, Vladimir Kurbalija, Marko Otlokan, Miloš Savić, and Nataša Vujnović-Sedlar

Scheduling Offloading Decisions for Heterogeneous Drones on Shared Edge Resources ... 69
Giorgos Polychronis and Spyros Lalis

Multi-objective Optimization Approach to High-Performance Cloudlet Deployment and Task Offloading in Mobile-edge Computing ... 89
Xiaojian Zhu and MengChu Zhou

Towards Secure TinyML on a Standardized AI Architecture ... 121
Muhammad Yasir Shabir, Gianluca Torta, Andrea Basso, and Ferruccio Damiani

Deep Learning Meets Smart Agriculture: Using LSTM Networks to Handle Anomalous and Missing Sensor Data in the Compute Continuum ... 141
Riccardo Cantini, Fabrizio Marozzo, and Alessio Orsino
Evaluating the Performance of a Multimodal Speaker Tracking System at the Edge-to-Cloud Continuum ... 155
Alessio Orsino, Riccardo Cantini, and Fabrizio Marozzo

A Deep Reinforcement Learning Strategy for Intelligent Transportation Systems ... 167
Francesco Giannini, Giuseppe Franzè, Giancarlo Fortino, and Francesco Pupo

Compressed Sensing-Based IoMT Applications ... 183
Bharat Lal, Qimeng Li, Raffaele Gravina, and Pasquale Corsonello

Occupancy Prediction in Buildings: State of the Art and Future Directions ... 203
Irfanullah Khan, Emilio Greco, Antonio Guerrieri, and Giandomenico Spezzano

Index ... 231
Towards the Edge-Cloud Continuum Through the Serverless Workflows

Christian Sicari, Alessio Catalfamo, Lorenzo Carnevale, Antonino Galletta, Antonio Celesti, Maria Fazio, and Massimo Villari

1 Introduction

In recent years, we have witnessed the rise of edge computing, a paradigm that, in contrast to cloud computing, aims to collect and process data as close as possible to the data source. Even though edge computing has rapidly gained popularity, the cloud has kept its leadership for both heavyweight jobs and data persistence, because migration and integration remain hard. The gap between edge and cloud has recently been filled by an intermediate layer named fog, which is in charge of redirecting information to cloud and edge. The composition and orchestration of services across the three tiers have given rise to the cloud-edge continuum (or just continuum) [26] paradigm. The continuum's main goal is to take advantage of cloud, edge, and eventually even fog, running applications where they fit best and re-adapting this placement whenever something changes in the environment or in the QoS parameters the application is trying to satisfy [24, 30, 40]. Deploying software in the continuum is considered challenging for many reasons, such as architecture dependency, host federation, and global resource balancing [28, 36]. However, the serverless paradigm has recently been introduced with the intent of making these problems surmountable. Serverless (i.e., the function-as-a-service model) is a platform-independent approach for deploying and exposing services and their APIs to final users, without worrying about the underlying system. Function-as-a-service (FaaS) engines are typically based on an orchestrator (e.g., Kubernetes) that is able to manage applications composed of many containers, load balance, and federate resources [24]. Serverless and FaaS paradigms are

C. Sicari (✉) · A. Catalfamo · L. Carnevale · A. Galletta · A. Celesti · M. Fazio · M. Villari
Department of Mathematics and Computer Sciences, Physical Sciences and Earth Sciences, University of Messina, Messina, Italy
e-mail: csicari@unime.it; alecatalfamo@unime.it; lcarnevale@unime.it; angalletta@unime.it; acelesti@unime.it; mfazio@unime.it; mvillari@unime.it

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
C. Savaglio et al. (eds.), Device-Edge-Cloud Continuum, Internet of Things, https://doi.org/10.1007/978-3-031-42194-5_1
widely used in cloud-only applications, but thanks to their flexibility, some recent works have emerged with the purpose of deploying functions at the edge of the network for lightweight problems [3, 7, 33, 45]. For example, FaaS can be used for isolated, loosely coupled tasks, but it is not ideal for complex, tightly coupled applications, because functions cannot easily be composed and integrated [26]. These drawbacks generate issues for continuum environments where, typically, applications are coupled in data-driven workflows with many tasks connected across different computing tiers [2, 20, 36].

In this chapter, we propose (i) new research guidelines for serverless orchestration in the continuum paradigm and (ii) a reference blueprint for the standard creation of a FaaS-based workflow orchestration. Specifically, we determine principles, definitions, a reference architectural model, and data structures that are useful for defining and orchestrating serverless workflows. Once the baseline is defined, we present a project called OpenWolf [42] as a ready-to-use solution for designing, deploying, and using serverless workflows composed of many functions spread across the continuum. In order to evaluate the platform, we analyzed a deep learning application for image classification in a smart city scenario, considering five steps: collection, transformation, training, inference, and plotting.

The rest of the chapter is organized as follows. Section 2 describes the state of the art behind serverless and workflows, highlighting weaknesses and strengths of existing solutions. In Sect. 3 we describe the building blocks and the glossary terms for any serverless-based workflow engine. In Sect. 4 we design a cloud-edge architecture used to manage and run serverless workflows. In Sect. 5 we describe OpenWolf, an open-source project compliant with the reference architecture. In Sect. 6 we describe a typical machine learning workflow using the glossary and the building blocks of this work; moreover, this workflow is tested using OpenWolf and its performance is reported. Finally, in Sect. 7 we summarize the work presented and highlight the next research directions.

2 Background

The continuum aims to foster collaboration between the cloud and edge tiers in order to distribute near-real-time processing to the edge and massive processing to the cloud [4]. The continuum faces several challenges related to different topics (e.g., security [41], scheduling), so that existing solutions [12] need to be reengineered to become suitable for the computing continuum.

Recently, serverless computing has emerged as a solution for distributing small functions using containers with the intent of reacting to external triggers (e.g., cronjobs, HTTP calls, message queue systems) [16, 46]. This new paradigm was well received by the scientific community, which has tried to exploit it for orchestrating functions over the continuum [29, 35] by using different orchestrators, such as Kubernetes [6], Nomad [8, 27], and more [13]. Moreover, FaaS is used in the
continuum to make development, deployment, and automatic balancing easier, thanks to the underlying orchestrators [5, 31, 43]. The combined use of the continuum and serverless has highlighted the problem of composing functions, i.e., the capacity to concatenate functions into more complex applications. The authors of [1] proposed three principles of serverless, namely (i) black-box functions, (ii) substitution, and (iii) double billing, which attempt to show that composing FaaS applications could be considered an anti-pattern. However, we do not agree with that statement.

The term workflow was used as a generic term for describing a well-defined organization of tasks connected in order to transform one or more inputs into a given output. In the scientific literature, this term mutated into scientific workflows, described [22] as a way to deal with data and pipelined computation steps in different application fields (e.g., bioinformatics, cheminformatics, ecoinformatics, geoinformatics, physics) without mastering a computer science background. For example, Kepler is a grid-based workflow system, later extended [34] to support distributed computing on grids. Almost in parallel, the Pegasus system [9] was proposed to abstract the workflow as an ensemble of independent tasks. Such technology continued a progressive evolution, keeping track of newer platforms, such as grids [9, 21], clouds [10], and containers [19, 44].

Going back to the last five years, workflows gained new popularity because of the increasing use of cloud computing and serverless. Indeed, the latter was widely adopted for designing and implementing workflows [17]. Perez et al. [32] designed a framework for executing Linux-based containers in a FaaS platform (i.e., AWS Lambda). Jiang et al.
[17] integrated scientific workflows into the main FaaS providers in order to exploit the serverless paradigm and make the implementation easier for end users (i.e., scientists). Skyport [14] was instead a brilliant idea for creating black-box-based workflows, by means of an engine able to compose workflows from lightly virtualized software (i.e., Docker containers). Recently, the workflow concept has become more sophisticated and accurate. It is no longer just a programming pattern or a software architecture design, but an on-premise computational engine for defining, storing, and deploying compositions of black-box functions [25]. Hyperstream [11] is a domain-specific tool used to deploy machine learning (ML) algorithms that are automatically fired by incoming streaming data. A step further in this direction was taken in [18], where the authors proposed a workflow engine server (WES), a back-end engine used to store functions and workflows and run them when triggered by an event. Such an engine introduces workflow modularity and a validation schema, but it lacks integration with external systems and expandability with other functions. One of the most autonomous engines has instead been presented by Lopez et al. [23] with Triggerflow, a trigger-based orchestration of serverless workflows. It lacks a user-friendly workflow editor, a data schema for the functions, and a global event registry. However, Triggerflow has clear strengths, such as a mechanism to fire trigger-based workflows, an asynchronous communication channel, and a serverless model. A different approach to workflows was instead presented in [37], where the authors propose R-Pulsar, a cloud-edge engine that is able to trigger functions according to an interesting
matching algorithm based on a decoupled associative message (AR) selection already presented in [38]. This helps in matching producers and consumers, as well as in taking actions, such as running a function and starting a data production [39]. The abovementioned approaches show good flexibility, especially when related to ML [15], but the potential of serverless is still not fully exploited.

3 Workflow Engine Characteristics and Principles

In this section, we lay out the foundations of the proposed workflow engine architecture, and we define the dictionary of terms used in the remainder of this chapter, i.e., (i) state, (ii) event, (iii) workflow, and (iv) manifest.

3.1 State

The main component of the architecture is the state. It encapsulates a function and all the information related to it within a job. It is stateless, which means that the running job is not aware of other jobs interacting with it, and therefore the job behavior cannot change based on previous executions. As shown in Fig. 1, the job is composed of (i) metadata and (ii) a function. The latter is the code that includes the job's business logic, and it is encapsulated inside a container. The metadata includes four different pieces of information: state description, handler instructions, input schema, and output schema. Specifically, they are described as follows:

Job description contains the job identifier, name, service description, and service class. They are used to quickly classify the service.

[Fig. 1 The state encapsulates a function and all the information related to it within a job]
Bootstrap instructions are run to instantiate a job inside the workflow engine. These could contain the code to build an image, set the environment variables, or run a Docker container.

Handler instructions are run every time a job is triggered. Basically, these validate the input against the input schema, run the function using the passed and parsed parameters, wait for the function result, and finally format the result so that it complies with the output schema.

Input/output schemas contain the schema of the acceptable input and the schema of the provided output. They are essential for creating compatible job chains.

Often, workflows also contain connectors, a special kind of job that simply maps a job's output to the next jobs' input, according to their input/output schemas. A connector is created on-premise during the workflow design, and it does not require a predefined input and output schema, since these change according to the workflow where the connector is located.

3.2 Event

An event is the only entity that can be processed in a workflow; it is originally sent from outside the workflow and then processed inside it. All changes applied to an event are separately stored in a data lake, while the last version of the event is propagated through the workflow's jobs. An event is composed of both immutable and mutable data. The immutable data includes the following:

Event ID identifies the event uniquely, and it is managed directly by the workflow engine.

Workflow ID is a reference to the workflow which is processing/has processed the event.

The mutable data are generally updated by the workflow engine and by the jobs that process the event. This includes the following:

Status is a value in the domain {Started, Processing, Error, Processed}.

Data is the last job's output.

Timestamp represents the date and time at which the last transformation was completed.
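The handler contract of Sect. 3.1 and the mutable event fields of Sect. 3.2 can be tied together in a short sketch. The following Python code is a minimal illustration only: the schema format, function, and field names are hypothetical and are not taken from OpenWolf's actual API.

```python
import json
import time

# Hypothetical input/output schemas for one state; in practice these
# would live in the state's metadata (see Fig. 1).
INPUT_SCHEMA = {"required": ["image_url"]}
OUTPUT_SCHEMA = {"required": ["label"]}

def validate(payload, schema):
    # Minimal schema check: only verifies that required keys are present.
    missing = [k for k in schema["required"] if k not in payload]
    if missing:
        raise ValueError(f"missing fields: {missing}")

def handler(event, function):
    """Generic stateless handler: validate input, run the wrapped
    function, validate output, then update the event's mutable fields."""
    validate(event["data"], INPUT_SCHEMA)
    result = function(event["data"])        # run the encapsulated function
    validate(result, OUTPUT_SCHEMA)
    event["data"] = result                  # propagate the new payload
    event["status"] = "Processed"
    event["timestamp"] = time.time()
    return event

# Example: a trivial classification function wrapped by the handler.
def classify(data):
    return {"label": "car"}

event = {"data": {"image_url": "http://cam/1.jpg"}, "status": "Started"}
print(json.dumps(handler(event, classify)["data"]))  # {"label": "car"}
```

Because the handler is stateless, the only way its output can vary is through the event payload it receives, which is exactly the property the state definition above requires.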
3.3 Workflow

The workflow diagram shown in Fig. 2 represents how states interact with each other. The workflow starts when the first node is triggered by an external event, i.e.,
[Fig. 2 Workflow example]
the action 1 in Fig. 2, carrying a data payload. Any event is directly connected to a state (action 2) and therefore to a connector (action 3). Connectors act as mediators, filtering events with a specific state. The first event is unique, and it is mapped one-to-one to a single workflow execution. This avoids overlaps with other events that follow the same workflow. Naturally, when an event passes through the states, it modifies its data according to the output of the previous state. Within the workflow, any kind of link is allowed, such as many-to-many, many-to-one, and one-to-many. However, a workflow must start and finish with only one job. When a many-to-one relationship (action 5 in Fig. 2) is defined, the triggering condition needs to be made explicit. In this regard, the condition may follow Boolean algebra, i.e., using AND to combine two or more events that must all be received before firing the next state, or using OR to combine two or more events such that any one of them is enough to fire the next state. The workflow diagram shown in Fig. 2 is an example of an e-commerce scenario, where customers are notified both by an email and by a short message system as soon as a product they are interested in is available again. The workflow is triggered by a web notification which says a given product is available again. The workflow fetches the users interested in this item using the states J0, J1, and J2 and then fetches the users' email addresses and telephone numbers. Finally, the state J3 is used to notify the users. In this scenario, three connectors are used. Two connectors make J0's output compatible with J1's and J2's inputs; the last one maps J1's and J2's outputs to J3's input.

3.4 Workflow Manifest

In order to describe a workflow within a schema, we propose a manifest based on the YAML format.
The manifest translates into a process description what was designed graphically, as in Fig. 2.
Listing 1 Workflow Manifest Example

name: workflow-name
callbackUrl: uri-where-to-send-result
states:
  state-id:
    function:
      ref: ref-to-function-id
      config:
        key: value
    start: true
handlers:
  handler-id:
    endpoint: endpoint-to-function
    config:
      key: value
workflow:
  state-id:
    activation: Boolean Equation
    inputFilter: jq command
    outputFilter: jq command

As shown in Listing 1, the manifest has (i) a name, (ii) a callback URL where the result is sent, and three more sections: (iii) states, (iv) handlers, and (v) workflow.

States list and describe all the states of the workflow. For each state, we define a name, a handler, and a global key-value configuration for the handler.

Handlers describe all the handlers called within the states. This attribute determines how to call the handler and the basic configurations that may be overwritten in the states' parts. The separation of the states and handlers sections allows the same handler to be used multiple times in different states.

Workflow describes how the states interact. For each state, we determine which previous states trigger it and how to transform inputs and outputs. This part acts as a connector.

4 Architecture

The reference architecture for managing a serverless workflow is shown in Fig. 3. It is a four-layered architecture composed of (i) infrastructure, (ii) federation, (iii) serverless, and (iv) service layers. All layers are described as follows.
[Fig. 3 Workflow engine architecture]

The infrastructure layer contains the bare-metal nodes that are part of the continuum environment. Nodes may have different geographical locations, architecture characteristics, and distributions.

The federation layer creates communication interoperability among the nodes of the infrastructure layer. It is composed of an overlay network used to connect nodes with a message-oriented middleware (MOM), with the intent of exchanging data over the overlay itself.

The serverless layer provides FaaS features to the layer above, i.e., the service layer. It uses a container orchestrator for deploying functions across the federation. It includes a function repository for storing the functions in the system, a compiler to build the same function for all the available and compatible architectures, and a gateway used to trigger the functions.

The service layer is, instead, the top layer of the architecture. It adds composition capability to the serverless layer. The service layer is composed of an event history database (EHD), a workflow repository, and a single agent. The EHD stores a permanent history of event transformations within the engine. Indeed, an event changes its mutable content when it is the input of a job. Hence, if a workflow is composed of n jobs, the initial event will undergo n changes. The EHD stores all the n changes, along with the initial content. Furthermore, we introduced a status history array field in the event data structure, as shown in Fig. 4. This approach allows us to (i) keep track of the event history, (ii) keep track of the event transformations, (iii) log every change, and (iv) recover any workflow state. The workflow repository stores the manifest files that contain the workflow
descriptions according to the structure defined in Sect. 3.4. The broker coordinates the service layer and, more in general, the overall infrastructure. It is basically in charge of receiving the external events, intercepting the execution of a function inside a triggered state, matching it against the proper workflow manifest in the workflow repository, and then updating the EHD to save the actual data coming from the events or from the states.

[Fig. 4 Event data model]

5 OpenWolf: Serverless Workflow Engine

The architecture shown in Fig. 3 is implemented in an open-source project currently under development, called OpenWolf [42]. The OpenWolf architecture is shown in Fig. 5, and it is composed of four main elements: (i) Kubernetes, (ii) OpenFaaS, (iii) Redis, and (iv) the OpenWolf agent. Kubernetes works between the federation and serverless layers. It is used to federate continuum nodes using its own overlay network. Moreover, it also provides the orchestration tools needed by the serverless layer for deploying functions across the continuum. OpenFaaS works at the serverless layer as the engine to store, compile, deploy, and manage functions in conjunction with Kubernetes. Redis works inside the service layer, and it acts as EHD and workflow repository. Indeed, it stores the workflow manifests, but it also keeps track of the workflow executions. For the latter, OpenWolf uses a well-defined event structure expressed in JSON format, whose main properties are called ctx and data. The first one represents the event context, and it is composed of the workflowID, which references the workflow to which the event belongs; the execID, which distinguishes the different executions of the same workflow; and the state, which references the state that has returned the event. The data property, instead, is the function's output
itself, and, unlike the ctx, which is read and set by the workflow agent, it is fully managed by the function. An event example is shown in Listing 2; it is fired by State C in the workflow shown in Fig. 6.

[Fig. 5 Workflow engine architecture]

Listing 2 Event Data Structure

{
  ctx: {
    workflowID: inference-traffic,
    execID: inference-traffic.123,
    state: C
  },
  data: {
    AIQ: 47,
    Scale: EU
  }
}
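Building on the ctx structure of Listing 2, the agent's next-state selection can be sketched as follows. This is a simplified illustration, not OpenWolf's actual code: the manifest dictionary, the state names, and the toy AND/OR activation parser are assumptions made for the example.

```python
# Simplified "workflow" section of a manifest: each state declares an
# activation expression over its predecessor states (Sect. 3.3 semantics).
WORKFLOW = {
    "B": {"activation": "A"},
    "C": {"activation": "A"},
    "D": {"activation": "B AND C"},   # many-to-one: wait for both B and C
}

def satisfied(expr, fired):
    # Toy evaluator for a tiny Boolean activation language; assumes state
    # names never contain the substrings "AND"/"OR".
    tokens = expr.replace("AND", "and").replace("OR", "or").split()
    resolved = [t if t in ("and", "or") else str(t in fired) for t in tokens]
    return eval(" ".join(resolved))  # only True/False/and/or reach eval here

def next_states(event_ctx, fired):
    """Given the ctx of a received event and the set of states that have
    already fired, return the states to trigger next."""
    fired = fired | {event_ctx["state"]}
    return sorted(s for s, spec in WORKFLOW.items()
                  if satisfied(spec["activation"], fired) and s not in fired)

# After A fires, B and C are triggered; D waits until both have fired.
print(next_states({"state": "A"}, set()))        # ['B', 'C']
print(next_states({"state": "C"}, {"A", "B"}))   # ['D']
```

A production agent would additionally apply the manifest's inputFilter/outputFilter transformations and rewrite the ctx before forwarding the event, as described in the next paragraph.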
[Fig. 6 Example of workflow in data analytics]

The OpenWolf agent acts as a broker for the workflow states, and it is used to achieve the function composition feature. OpenWolf ensures that any event follows the correct path in the workflow and triggers the correct states with a proper transformation of the incoming event. In this regard, the OpenWolf agent is deployed as a standalone stateless microservice inside the same Kubernetes cluster used to run the serverless functions. The agent exposes two interfaces. The first one is a public interface used to trigger a workflow from the outside. The second one is closed inside the Kubernetes cluster, and it is used as a callback URL for each asynchronous function triggered by any workflow. By doing that, the agent intercepts all the events belonging to a workflow, extracts the context information, and uses it to fetch all the workflow and current execution information. It then triggers the next states in the manifest, forwarding the received event with the updated ctx property. This process is described more concisely in the activity diagram in Fig. 7.

6 Use Case

Smart cities are a typical scenario for a computing continuum use case. For example, in private and public spaces, we can find Internet of Things (IoT) sensors and small computing devices, like cameras and Raspberry Pis, for monitoring buildings, traffic, or environmental parameters. These data are then typically processed in local data factories run by private citizens, municipalities, or research institutes, which often rely on cloud providers like AWS or Azure, e.g., for long-term storage or processing. As a consequence, it is easy to find all three continuum layers in such deployments. In the following, we analyze a typical image processing pipeline.
Smart cities rely on this kind of algorithm for detecting violent and dangerous situations and traffic rule violations, or for roadside surveillance applications (Fig. 8).
[Fig. 7 OpenWolf agent actions]
[Fig. 8 OpenWolf for image processing]

The designed image processing workflow is composed of five states. Each state represents a function, i.e., a process inside the workflow. Each state is deployed within one of the computing continuum tiers according to the static scheduling rule defined in the workflow manifest. The states are described as follows:

Collect exploits a camera stream to collect environment images.

Transform edits the images, cleaning and filtering noisy data. It can be run on any of the continuum's tiers.

Train trains a recurrent neural network (RNN) model used to analyze the collected images.

Inference predicts the input image's label using the latest model produced by the train state.

Show pushes the result of the inference to a web page.

The first problem we identify in the continuum, especially when FaaS is adopted, is having a good scheduler for deploying functions according to specific quality of service (QoS) requirements, e.g., latency, network bandwidth usage, and resource performance. The second problem, which is related to the first one, is where to put the data. These are typically collected at the edge, but they may be only partially computed at the edge or delivered to the cloud for massive analysis. QoS requirements depend directly on the service we are providing in the smart city. For example, road traffic monitoring may require optimizing accuracy, whereas gunshot detection may require real-time analysis. Our proposed solution aims to give the possibility to directly customize what data are processed and where, trying to satisfy any kind of QoS.
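Using the manifest format of Listing 1, the five states above could be declared roughly as follows. This is a hypothetical sketch: the pipeline name, function references, endpoints, and tier placements are illustrative assumptions, not taken from the actual OpenWolf deployment.

```yaml
name: smart-city-image-pipeline
callbackUrl: http://dashboard.local/results      # hypothetical endpoint
states:
  collect:
    function:
      ref: camera-collector       # entry state, runs at the edge
    start: true
  transform:
    function:
      ref: image-cleaner          # may run on any tier
  train:
    function:
      ref: rnn-trainer            # scheduled on the cloud tier
  inference:
    function:
      ref: rnn-inference          # scheduled on the edge tier
  show:
    function:
      ref: web-plotter
workflow:
  transform:
    activation: collect
  train:
    activation: transform
  inference:
    activation: transform AND train   # needs both fresh data and a model
  show:
    activation: inference
```

The AND activation on inference expresses the many-to-one join of Sect. 3.3: a prediction is fired only once a transformed image and an up-to-date model are both available.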
6.1 Performance Evaluation

We evaluated the workflow in three different environments. We considered three key workflow moments, (i) training, (ii) data fetching, and (iii) data inference, in a full-cloud, a full-edge, and a continuum test bed. In the latter case, cloud nodes were in charge of the model training, whereas edge nodes focused on collecting and inferring data. These three functions were encapsulated inside three different OpenFaaS functions. The dataset adopted is CIFAR-10, the algorithm was trained for 50 epochs, and the RNN was implemented with PyTorch. The training data size is 130 MB, while the test data size, used during the inference, is around 100 MB. As an edge node, we used a single Raspberry Pi 4, with an ARM64 operating system, 4 GB of RAM, and a 1.5 GHz quad-core processor. As a cloud node, we used a virtual machine with 16 GB of RAM, a 2.8 GHz quad-core processor, and an x64 operating system.

[Fig. 9 Workflow performance comparison: train, data fetch, and inference times for the cloud, edge, and continuum deployments]

Results are shown in Fig. 9. The edge node needed 400% of the cloud's time for training and 120% for inferring data, but its time to access local data is close to zero. On the other hand, the cloud requires 45 seconds for transferring data from the edge object storage to local storage. Moreover, the edge device does not require network usage, whereas the cloud uses the WAN to receive the entire test dataset from the edge object storage. Finally, distributing the computation over the continuum exploits the cloud's training speed and the edge's data locality, avoiding any massive network usage. As a direct consequence, the inference is done at the edge, but, as shown in Fig. 9, the overall performance in the continuum is better than in both the edge and the cloud.
7 Conclusion and Future Works

In the era of serverless and microservice architectures, workflows are slowly gaining popularity as a tool to mix serverless services and to deploy them in order to compose complex functions in modern cloud-based engine infrastructures. Historically, workflows have been understood as a computation chain whose processes depend on the specific field in which they are applied. In the last two years, this term has instead started to appear in different fields, like microservices, FaaS, and the cloud-edge continuum. In some way, the scientific community shares the idea that workflows enable cooperation among functions, services, and, in general, network hosts. This trend is fully reasonable, since we have managed to deploy functions and services everywhere; consider the new concept of the "Internet of Everything." However, we have not yet managed to link these capabilities together. To reach this goal, different open-source and enterprise providers have proposed different "linking services," which have been called FaaS orchestrators. These are of course valid products, but they do not rely on a standard, are not easily integrable, and none of them fulfills all the requirements a function workflow may pose. In this scenario, we started from scratch, defining the workflow concept. We first determined the elements involved in workflows, i.e., jobs and events, and how they relate to each other. After that, we defined a design schema for workflows with clear terms, figures, and data models. Finally, using these tools, we proposed a reference architecture for the management of a workflow platform over a continuum cluster.
After introducing those glossary terms and architectural patterns, we validated our work by presenting OpenWolf, a recently launched open-source engine used to design and develop FaaS workflows for heterogeneous Kubernetes clusters, and we measured its capabilities on a continuous learning workflow applied to a smart city environment, used to keep a square or a street under security control. This work can be considered a starting point for the serverless workflow field, but we still have to deal with different challenges, like (i) designing the security aspects of the engine, (ii) designing the fault tolerance aspects of the nodes, and (iii) implementing a workflow engine that fully respects this reference architecture. All these challenges will be addressed in future work, with the main intent of providing a usable prototype of a workflow engine platform.

References

1. I. Baldini, P. Cheng, S. Fink, N. Mitchell, V. Muthusamy, R. Rabbah, P. Suter, O. Tardieu, The serverless trilemma: function composition for serverless computing, in Proceedings of the 2017 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (2017), pp. 89–103. https://doi.org/10.1145/3133850.3133855
Firmware Dynamic Analysis Through Rewriting

Claudia Greco, Michele Ianni, Antonella Guzzo, and Giancarlo Fortino

C. Greco · M. Ianni · A. Guzzo · G. Fortino
Department of Computer Science, Modeling, Electronics and System Engineering (DIMES), University of Calabria, Arcavacata, Italy
e-mail: claudia.greco@dimes.unical.it; michele.ianni@unical.it; antonella.guzzo@unical.it; giancarlo.fortino@unical.it

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
C. Savaglio et al. (eds.), Device-Edge-Cloud Continuum, Internet of Things, https://doi.org/10.1007/978-3-031-42194-5_2

1 Introduction

The spread of Internet of Things (IoT) devices and their full integration into everyday life is one of the major factors defining the current technology landscape. With embedded computational power and persistent connectivity to the Internet, an ever-increasing number of everyday objects have become IoT devices, creating new levels of automation, efficiency, and convenience. IoT devices are used in a wide range of applications and ecosystems, including smart homes, healthcare, transportation, industrial settings, and daily living [1, 2]. By gathering and transmitting data, smart objects enable new possibilities for innovation and improvement. Considering their constant use and the access they have to our data, ensuring that these devices are safe is an urgent concern, exacerbated by the fact that they still lack adequate security and safety measures, putting privacy at risk and making IoT devices increasingly appealing targets for attackers [3]. The vulnerabilities present in IoT devices make them highly susceptible to attacks, and they are frequently viewed as low-hanging fruit by malicious actors owing to their ease of exploitation [4]. The necessity of conducting a thorough, security-focused evaluation of IoT devices is well established [5]. However, conventional analysis methods are often unsuitable for the IoT environment: dynamic analysis of the firmware of these devices typically requires that code is executed not in the device's native execution environment but in a controlled one. The reasons are manifold. First of all, dynamic analysis on the native device may require expensive hardware that may not be readily available to the analyst
and that may be prone to damage during testing. It can also be hard to supply inputs to guide the analysis (as happens with fuzzing) and to debug. As explained in [6], when fuzzing it is hard to detect memory-corruption vulnerabilities on low-cost bare-metal devices lacking security mechanisms, because of the lack of visible effects. While, in theory, the device's ports could be used for debugging, these ports are often obscured or inaccessible. Additionally, with physical hardware it is not feasible to perform concurrent executions, which is essential for dynamic analysis. For this purpose, we rely on the emulation of the firmware, better known as firmware re-hosting. This process separates the firmware from the hardware and emulates it on a different architecture without the need for the actual hardware. Firmware re-hosting offers several benefits, including the ability to execute in a controlled environment, use debuggers for greater insight, concentrate solely on software components, and benefit from scalability. However, firmware re-hosting is a challenging task, because the firmware frequently retrieves input directly from the device peripherals, which can have their own unique access definitions and different configurations and interfaces. Several solutions have been proposed in the literature for firmware re-hosting, based on approaches such as hardware-in-the-loop (HITL), low-level abstractions, learning, or symbolic execution. Despite the significant progress that has been made in the area of automated re-hosting and analysis of firmware, the current solutions come with drawbacks. While the hardware-in-the-loop approach has its advantages, it is often not feasible due to the difficulty or impossibility of obtaining the real hardware, or the risk of damaging expensive components during large-scale automated analysis.
Approaches based on abstractions usually incorporate binary instrumentation techniques to intercept calls to functions that interact with the hardware. Binary instrumentation adds substantial overhead to an already slow emulation environment and severely impacts the performance of dynamic vulnerability discovery techniques like fuzzing, which requires a high number of executions of the binary code. This chapter has several goals: we analyze the solutions currently adopted in the literature to enable dynamic analysis of re-hosted firmware, together with their limitations, and we offer our point of view on how progress can be made in this context to enable faster vulnerability discovery with traditionally employed techniques such as fuzzing. In particular, we discuss our idea of replacing the device peripherals and the interactions between firmware and hardware with high-level operations, bringing the entire firmware behavior to the software level. This chapter directly addresses a crucial aspect of the broader device-edge-cloud continuum: by exploring the security challenges and vulnerabilities of IoT devices, with a specific focus on firmware re-hosting and dynamic analysis, our work aligns with the overarching theme of advancing paradigms, architectures, and applications in this interconnected landscape. The chapter is organized as follows: in Sect. 2, we provide background on the concept of firmware re-hosting and its necessity for dynamic analysis, along with basic notions about well-known vulnerability discovery techniques and different
levels of analysis. In Sect. 3, we discuss hardware emulation, its challenges, and the limitations of the existing approaches. In Sect. 4, we review the state of the art in firmware emulation and discuss the drawbacks of existing solutions. Finally, in Sect. 5, we describe the idea behind our proposal.

2 Background

Program analysis plays a crucial role in the security assessment of software systems, and, over the years, there has been significant progress in the development of new techniques and methodologies to accomplish this task. Great effort has been put into making program analysis scalable, leading to dynamic analysis tools such as fuzzers and symbolic execution engines that make it possible to analyze binary programs that, as often occurs, do not come with their source code. Popular examples in the security community of tools for fuzzing and symbolic execution are AFL [7] and angr [8]. However, program analysis tools, especially when performing dynamic analysis, need high levels of parallelism and scalability to function properly, which necessitates moving the execution into an emulated environment. With the advent of IoT, the use of such tools, originally meant for desktop and mobile systems, has been extended to various devices and their firmware. The need to shift execution to a virtual environment in the case of firmware stems from its dependency on the hardware, which limits the ability to observe its behavior. For example, in order to perform security testing of a firmware by means of fuzzing, it is often necessary to substitute the values derived from peripheral interactions with inputs generated by a fuzzer. It follows that the execution must be taken out of its native environment and performed in an emulated and controlled environment, without involving physical components.
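As a minimal illustration of this substitution (the routine and the sensor are invented for this sketch), firmware logic that consumes a peripheral read can be exercised in a re-hosted setting simply by injecting harness-generated values in place of the hardware access:

```python
def overheat_alarm(read_temp):
    """Hypothetical firmware routine; read_temp abstracts the sensor
    access (on the device it would poll an ADC register via MMIO)."""
    return read_temp() > 90

# Re-hosted, the harness supplies the values instead of the hardware:
for injected, expected in [(25, False), (91, True), (200, True)]:
    assert overheat_alarm(lambda: injected) == expected
```

A fuzzer would play the role of the harness here, driving `read_temp` with generated values rather than a fixed list.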
The process of emulating a firmware in a way that accurately replicates its behavior on real hardware is referred to as re-hosting. Firmware re-hosting makes it possible to thoroughly examine and manipulate firmware in ways that are not feasible on physical hardware and offers many benefits to analysts, including the ability to limit the scope of the analysis to the software components alone while providing scalability. In this way, no physical embedded component of the device is necessary for the security analysis, and the program execution can be attached to debuggers, making it possible to gain a more in-depth understanding of program execution. Unfortunately, the task of firmware re-hosting is not without challenges, since, while running, the firmware interacts with peripherals. It follows that, in order to obtain a properly functioning emulation of the embedded system, firmware re-hosting implies modeling, alongside the CPU, the behavior of the device peripherals. Peripherals fall into two categories: on-chip peripherals include components such as timers, bus controllers, networking elements, and serial ports, while off-chip peripherals include sensors, actuators, external storage devices, and other circuit board circuitry that is accessed via on-chip peripherals. Specifically, on-chip peripherals, such as general-purpose input/output (GPIO) or bus interfaces like inter-integrated circuit (I2C) and serial peripheral interface (SPI), mediate the communication between the firmware and the off-chip peripherals and are typically controlled by the CPU through memory-mapped input/output (MMIO), allowing programs to access them via memory. The absence of these components can result in the firmware crashing or producing outcomes that deviate from those generated on real hardware, and since a good many system functions involve interactions with both on-chip and off-chip peripherals, a proper emulation of them is vital.

2.1 Vulnerability Discovery Techniques

As stated above, some popular dynamic analysis techniques employed during the vulnerability discovery process of a system benefit from its emulation. In this section, we briefly outline some of the most popular dynamic analysis methodologies.

2.1.1 Fuzzing

The goal of fuzzing is to identify inputs that cause the program to behave in unexpected ways or even crash, revealing the presence of vulnerabilities, bugs, or other security issues that could be exploited by attackers. During fuzzing, a large number of values randomly or semi-randomly generated by a fuzzer are given as input to the target program, which is then monitored for possible crashes, errors, or unexpected outputs. Popular fuzzers are AFL [7], libFuzzer,1 and Honggfuzz.2

2.1.2 Concolic Execution

Concolic execution combines symbolic execution and concrete execution to analyze the behavior of computer programs. It runs the program with concrete inputs while maintaining a symbolic state. It is also known as dynamic symbolic execution, and it differs from static symbolic execution in that it explores only one path at a time, determined by the concrete inputs. To explore different paths, the technique "flips" path constraints and uses a constraint solver to calculate concrete inputs that result in alternative branches.
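The path-flipping loop described above can be sketched in a few lines; here the "symbolic state" is just a recorded list of branch predicates, and a brute-force search over a small input range stands in for the constraint solver (the target program and all names are illustrative):

```python
def target(x, trace):
    """Toy program under test; each branch appends (predicate, taken)."""
    b1 = x > 100
    trace.append((lambda v: v > 100, b1))
    if b1:
        return "big"
    b2 = x % 7 == 0
    trace.append((lambda v: v % 7 == 0, b2))
    return "lucky" if b2 else "plain"

def flip_last(trace, search_range=200):
    """Brute-force stand-in for a constraint solver: find an input that
    satisfies the path prefix while negating the last branch condition."""
    prefix, (last_pred, last_taken) = trace[:-1], trace[-1]
    for v in range(search_range):
        if all(p(v) == taken for p, taken in prefix) and last_pred(v) != last_taken:
            return v
    return None

trace = []
assert target(5, trace) == "plain"       # concrete run: x <= 100, x % 7 != 0
new_input = flip_last(trace)             # negate the "x % 7 == 0" constraint
assert new_input is not None
assert target(new_input, []) == "lucky"  # new input drives the other branch
```

A real engine keeps symbolic expressions over machine state rather than Python lambdas and queries an SMT solver, but the explore-one-path-then-flip loop is the same.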
1 https://llvm.org/docs/LibFuzzer.html
2 https://github.com/google/honggfuzz
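A toy mutational fuzz loop in the spirit of the fuzzers cited in Sect. 2.1.1 can be sketched as follows; the target parser and its planted length-check bug are invented for illustration:

```python
import random

def parse_packet(buf):
    """Hypothetical firmware parser: byte 0 is a magic value and byte 1
    a payload length; lengths above 8 would overflow an 8-byte buffer."""
    if len(buf) < 2 or buf[0] != 0xAA:
        return "ignored"
    if buf[1] > 8:
        return "crash"          # planted bug: unchecked length field
    return "ok"

def fuzz(seed, iterations=10_000):
    """Mutate one random byte per iteration and collect crashing inputs."""
    random.seed(0)              # deterministic for reproducibility
    crashing = []
    for _ in range(iterations):
        data = bytearray(seed)
        data[random.randrange(len(data))] = random.randrange(256)
        if parse_packet(bytes(data)) == "crash":
            crashing.append(bytes(data))
    return crashing

crashes = fuzz(bytes([0xAA, 4, 1, 2, 3, 4, 0, 0, 0, 0]))
assert crashes                  # the length-byte mutation is found quickly
```

Production fuzzers such as AFL add coverage feedback to steer mutations, but the generate-execute-monitor loop is the same.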
2.1.3 Binary Instrumentation

Binary instrumentation is used to gather information about the behavior and performance of an executable by adding code to the compiled binary. Instrumentation is meant to track and record data about program execution, such as function call information, memory accesses, and performance metrics, which can be used for a variety of purposes, such as debugging, testing, profiling, and other security analyses. The instrumentation process is performed at the machine code level, making it platform-independent, and can be used to analyze a wide range of software, including low-level system code, firmware, and high-level applications. However, binary instrumentation can also introduce overhead in the program execution, as the added code can slow down the program and consume more memory.

2.2 Analysis Levels

The software security analysis of an embedded system can be performed at different levels: full-system, process level, or application level.

– With full-system emulation, the firmware runs inside a virtual environment recreated by an emulator, which is supposed to mimic every component of the original system, from the processor to the hardware peripherals. This approach is able to test the system in every aspect, generating the same data and behavior as the original target system, but it is the hardest to achieve, as well as slower than the other levels of emulation. Full-system emulation is possible by means of base emulators such as QEMU and Simics, although they only provide a small set of peripherals, which does not cover the wide and diverse range of possible hardware in embedded systems.
– Analysis at the process level allows the emulation of the behavior of specific processes inside the target system.
The execution of the processes can be performed inside the native system or on a different hardware platform with an operating system providing an execution environment that resembles the native one. Emulators such as QEMU and Simics allow process-level analysis through user-mode emulation. Process-level emulation is faster than full-system emulation; however, the results of the analysis may differ from reality if the emulated execution environment is not faithful to the original, thus compromising the vulnerability discovery process.
– Analysis at the application level consists of analyzing a single application that can run in the native target system. This can be done both statically, by extracting application-specific data, and dynamically, by running the application itself. The limitation of the static approach is that reducing the analysis to the evaluation of statically extracted data can detect existing vulnerabilities in the specific application, but not within the system that interacts with that application. In the case of dynamic analysis, the execution is usually carried out in the native
execution environment. This analysis level is faster than the others, since it does not require the emulation of the system or process; however, the emulation may not be accurate if the target application depends on native hardware features not supported by the execution environment.

A problem encountered when full-system emulation is not involved is that the host platform used to run the firmware is not necessarily capable of supporting the use of dynamic analysis tools. This results from the fact that IoT systems are generally lightweight and have reduced computing and storage resources compared to traditional systems.

3 Motivation

Firmware re-hosting for IoT devices constitutes a significant challenge due to the wide heterogeneity of both hardware and software components, especially in comparison to desktop and mobile systems, where standardized execution environments and a limited number of operating systems and architectures make the issue much easier to handle. The creation of a single generic emulator capable of transparently hosting a given firmware turns out to be a highly impractical goal due to the remarkable diversity of existing embedded systems and architecture designs, as well as the proprietary nature of some chip designs. This diversity results from the combination of various hardware architectures (x86, ARM, MIPS, and so on), different types of embedded peripherals, multiple operating systems, and customized configurations and interfaces. The conjunction of these factors leads to a long list of realizable embedded systems, making it challenging to design a general emulator for firmware re-hosting. These challenges have encouraged the development of a variety of emulation solutions. A widely adopted approach is hardware-in-the-loop (HITL) [9–12], in which the firmware is only partially emulated and, whenever an unsupported I/O operation is attempted, the request is redirected to the hardware itself.
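The HITL forwarding idea can be sketched as an emulator that services the MMIO regions it models and forwards every other access to a proxy for the physical device (all class names and register addresses here are invented):

```python
class HardwareProxy:
    """Stand-in for a debug-link connection to the physical board."""
    def read(self, addr):
        return 0x2A              # pretend the real board answered

class HitlEmulator:
    def __init__(self, proxy):
        self.proxy = proxy
        self.modeled = {}        # addr -> callable returning a value

    def model(self, addr, fn):
        """Register a software model for one MMIO register."""
        self.modeled[addr] = fn

    def mmio_read(self, addr):
        if addr in self.modeled:
            return self.modeled[addr]()    # emulated peripheral
        return self.proxy.read(addr)       # unsupported: forward to hardware

emu = HitlEmulator(HardwareProxy())
emu.model(0x4000_0000, lambda: 7)          # e.g. a modeled timer register
assert emu.mmio_read(0x4000_0000) == 7     # handled locally
assert emu.mmio_read(0x4002_0000) == 0x2A  # forwarded to the device
```

Each forwarded read is a round trip over the debug link in a real setup, which is exactly the latency cost discussed below.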
In HITL, the firmware interacts with a hardware platform that mimics the real hardware. The platform might be an actual piece of hardware or a hardware simulator and provides the necessary peripheral interfaces and inputs/outputs that the firmware requires to function correctly. This lessens the need for access to the real target hardware and enables testing and evaluation of the firmware in a controlled environment. Various studies employ operating system and/or hardware abstractions [13–15] to take advantage of the abstraction layer provided by the firmware. An operating system abstraction is a layer of software that provides a standard interface for the firmware to interact with the hardware, while a hardware abstraction provides a similar layer of abstraction, but it is specific to the hardware being used. Such abstractions provide high-level representations of the underlying hardware, thus enabling the firmware to interact with the hardware in a manner that is independent
of the actual hardware's implementation. Conversely, other studies aim for full-system emulation [16–18] without the presence of the actual hardware. With full-system emulation, the behavior of an entire embedded system, including the firmware and the underlying hardware, is recreated in a virtual environment. These works focus on the automated creation of models that describe the interactions between firmware and hardware, allowing these interactions to be replayed without a direct connection to the specific device, or even learned through models generated from recorded real interactions. Once the software and hardware models have been created, they can be integrated into a virtual environment that simulates the behavior of the real system. The virtual environment can be used to test and evaluate the firmware in a manner that is independent of the underlying hardware. This makes it possible to test and evaluate firmware on different hardware platforms, or even on platforms that do not yet exist, without the need for access to the actual hardware. Although much progress has been made on the firmware re-hosting challenge, the current approaches come with limitations. HITL-based solutions are effective in enabling interactivity and allowing testers to utilize dynamic analysis tools to feed data into the firmware. However, this method introduces latency in the forwarding process, thus impeding the execution speed, reducing parallelism and scalability, and limiting its performance as a testing approach. Furthermore, HITL still entails a substantial tie between the firmware and the hardware. Methods that rely on operating system or hardware abstractions overcome the HITL drawbacks, though they are limited in the types of firmware they can handle.
Indeed, in order to accommodate a broad spectrum of firmware, it is crucial for emulators to be devoid of high-level abstractions. Finally, learning-based solutions still require interactions with the real hardware to collect data on the peripherals' behavior. To overcome the limitations of the currently adopted methodologies for re-hosting, we propose a novel approach that takes advantage of binary rewriting techniques. Binary rewriting modifies the behavior of a compiled program without having its source code or recompiling it, while keeping the binary executable. Binary rewriting can be classified as either static or dynamic. Static binary rewriting involves making modifications to the binary file and saving them permanently, while dynamic binary rewriting modifies the binary while it is being executed, without making any permanent change. Several binary rewriting methodologies have been proposed in the literature, including both static [19–23] and dynamic [24–28] solutions. Binary rewriting can be applied in a variety of ways, such as monitoring a program during execution, optimizing it, and emulating it [29]. Finally, many proposals build their emulation on top of base emulators such as QEMU [30] and Simics [31]. The exclusive use of these tools for hardware emulation is discouraged, since they are only able to emulate a restricted list of CPUs and peripherals, support a limited number of possible configurations [30], and may also require significant analyst intervention [31].
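As a toy illustration of static binary rewriting (the two-byte instruction format is invented for this sketch), a pass can patch every "read peripheral" instruction into a "load immediate", so the rewritten binary no longer needs hardware at all:

```python
# invented ISA: each instruction is 2 bytes (opcode, operand)
OP_IN    = 0x10   # read from peripheral port <operand>
OP_LOADI = 0x01   # load immediate <operand> into the accumulator
OP_ADD   = 0x02   # add immediate <operand> to the accumulator
OP_HALT  = 0xFF

def rewrite(binary, port, value):
    """Statically patch every `IN port` into `LOADI value`."""
    out = bytearray(binary)
    for i in range(0, len(out), 2):
        if out[i] == OP_IN and out[i + 1] == port:
            out[i], out[i + 1] = OP_LOADI, value
    return bytes(out)

def execute(binary):
    """Tiny interpreter; refuses to run unpatched hardware reads."""
    acc = 0
    for i in range(0, len(binary), 2):
        op, arg = binary[i], binary[i + 1]
        if op == OP_IN:
            raise RuntimeError("needs real hardware")
        elif op == OP_LOADI:
            acc = arg
        elif op == OP_ADD:
            acc += arg
        elif op == OP_HALT:
            break
    return acc

prog = bytes([OP_IN, 0x05, OP_ADD, 3, OP_HALT, 0])
patched = rewrite(prog, port=0x05, value=10)
assert execute(patched) == 13     # runs to completion without hardware
```

Real rewriters must handle variable-length instructions, relocations, and indirect control flow, but the principle of patching hardware-touching code in place is the same.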
4 State-of-the-Art Approaches and Their Limitations

Firmware re-hosting has drawn a lot of interest in the literature, and various works have introduced solutions to address the hardware emulation challenge, with the ultimate goal of enabling faster security assessment of firmware.

HITL-Based Approaches Several works, such as Avatar [9], Avatar2 [32], PROSPECT [10], SURROGATES [11], Inception [33], and Charm [12], pursue the HITL approach, proposing partial emulation of the firmware with unsupported I/O requests redirected to the real peripherals, which still implies a strong dependence on the hardware. In order to conduct dynamic analysis, these works propose partially offloading the execution of the firmware to the actual hardware, thus compromising scalability.

Abstraction-Based Approaches Other works design emulation on top of OS abstractions [13, 14] or hardware abstraction layers (HALs) [15]. Firmadyne [13] is a full-system emulation tool for automated large-scale dynamic analysis of firmware. When a firmware image is provided to Firmadyne, the tool extracts the file system and performs an analysis to determine the hardware specifics. Then, a pre-built Linux kernel that corresponds to these specifics is employed, and an initial emulation is conducted using QEMU to infer the system and network configuration. Costin et al. [14] present a framework for scalable security testing of embedded web interfaces using dynamic analysis tools. The framework relies on the emulation of firmware images via QEMU, replacing the system's native kernel with a default kernel, for a specific CPU architecture, supported by QEMU. The limit of relying on OS abstractions is that only a reduced number of firmware images are supported: indeed, only Linux-based firmware is handled by both [13, 14]. HALucinator [15] relies on a technique called high-level emulation (HLE) to perform dynamic analysis on firmware in embedded systems.
The authors leverage the hardware abstraction layers (HALs) commonly used by firmware developers to simplify their jobs as a basis for re-hosting and analyzing firmware. The technique works by first identifying the library functions responsible for hardware interactions in a firmware image and then providing high-level replacements for these functions in a full-system emulator such as QEMU. The authors demonstrated the practicality of HLE for security analysis by supplementing their prototype system, HALucinator, with a fuzzer to locate multiple previously unknown vulnerabilities in firmware middleware libraries. In [34], a technique called para-rehosting is proposed to smooth the re-hosting of microcontroller (MCU) software on commodity hardware. The authors implemented a portable MCU (PMCU) using the POSIX interface, which models common functions of the MCU cores and accurately replicates the common behaviors of real MCUs. They abstracted and modeled common functions of MCU cores and proposed HAL-based peripheral function replacement, in which high-level hardware functions are replaced with an equivalent back-end driver on the host, allowing for incremental plug-and-play library porting. Both [15] and [34] present
interactive and hardware-independent environments. However, these environments are built on the assumption that the firmware relies on HALs, which may not always be the case. In order to deal with a wider range of firmware, emulators should be abstraction-free, meaning that they should not depend on high-level constructs.

Learning-Based Approaches

Other approaches automatically create emulators for firmware. They require knowledge of how the firmware interacts with the peripherals, capturing and reproducing data generated during I/O interactions to model the hardware behavior. This allows large-scale and interactive executions but inevitably necessitates trace recording from within the device itself, thereby restricting accurate execution in the emulator to only the recorded program paths. In [16–18, 35, 36] the peripherals' behavior is learned and the interactions between firmware and hardware are modeled in order to enable virtualized execution of firmware without implementing peripheral emulators at all. Pretender [16] and Conware [35] gather a set of observations of the low-level interactions between the firmware and the original peripherals by means of HITL or code instrumentation. Pretender then utilizes machine learning to generate models of the memory-mapped input/output (MMIO) operations and interrupt-driven peripherals, which can replace the physical hardware. In contrast, Conware generates composable automata representations based on the collected recordings to model the peripherals, which can be merged to build generalized models. Both Pretender and Conware intend to emulate arbitrary firmware without having to instrument the actual firmware; however, they imply access to the physical hardware during the training phase in order to gather the observations needed for the emulation and require instrumentation to detect interactions with the hardware.
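The record-and-replay idea underlying these systems can be caricatured in a few lines: MMIO reads recorded on the real device are replayed by the emulated peripheral, with some policy for generalizing beyond the trace. This is our illustrative sketch, with invented register addresses and values:

```python
# Caricature of learning-based peripheral modeling: a trace of MMIO reads
# recorded on real hardware is replayed by the emulated peripheral.
# Register addresses and values are invented for the sketch.

recorded_trace = {
    0x4000_0004: [0x00, 0x01, 0x01],  # status register: not-ready, then ready
    0x4000_0008: [0x2A],              # data register: one sensor sample
}

class ReplayPeripheral:
    def __init__(self, trace):
        self.trace = {addr: list(vals) for addr, vals in trace.items()}

    def read(self, addr):
        values = self.trace.get(addr, [])
        # Replay the next recorded value; repeat the last one when the
        # trace is exhausted (one simple generalization policy), and
        # default to 0 for registers never seen in the trace.
        return values.pop(0) if len(values) > 1 else values[0] if values else 0

dev = ReplayPeripheral(recorded_trace)
reads = [dev.read(0x4000_0004), dev.read(0x4000_0004), dev.read(0x4000_0008)]
assert reads == [0, 1, 42]
```

The repeat-last default also illustrates the limitation noted above: behavior is only faithful along paths that were actually recorded.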
Also, Pretender only provides models for interrupt-based firmware, which are therefore not generic or suitable for non-interrupt-based firmware. P2IM [17] is a framework for approximate emulation of MCU firmware that does not model the physical peripherals themselves but treats them as black boxes. Whenever the running firmware requests interactions with the peripherals, it is provided with acceptable inputs that simply satisfy internal checks and do not cause the execution to halt or crash. P2IM does not require deep knowledge of the peripherals' behavior, allowing only a small set of values to be considered as possible inputs to the firmware. Although this approach allows hardware-independent emulation, it reduces the ability to effectively represent complex firmware logic.

Symbex-Based Approaches

Other works achieve firmware re-hosting by means of symbolic execution [18, 36, 37]. Laelaps [18] and Jetset [36] propose symbolic execution-based approaches to infer the peripheral behavior expected by the firmware. Laelaps is a concolic execution-based firmware re-hosting framework that combines concrete and symbolic execution. It uses a full-system emulator such as QEMU to run the firmware and obtain the inner state of the execution and switches to symbolic execution whenever an access to an unimplemented peripheral is attempted, in
order to find valid input that leads the execution along a path that resembles realistic behavior. To prevent path explosion, Laelaps relies on the Context Preserving Scanning Algorithm (CPSA) heuristics, which can infer inputs valid for the near-future execution but which may cause the execution to crash in the long term. Jetset is a tool that relies on symbolic execution to infer how the peripheral devices are expected to behave in their interaction with the firmware. These inferred behaviors are used while the firmware runs in an emulator such as QEMU in order to reproduce a target device's functionality. Path explosion is mitigated using guided symbolic execution with a variation of Tabu Search to minimize the distance to the goal. However, following this approach, the direction to take at each branch is chosen by looking at the distance to the goal, which can make it difficult to model more complex behaviors. μEmu [37] uses symbolic execution to extract valuable information and to build a knowledge base that is used to emulate the peripherals' behavior during firmware re-hosting. As it carries out the knowledge extraction process, μEmu tries to avoid path explosion by switching to another path only when the current one is found invalid.

Table 1 Strengths and limitations of existing approaches

HITL [9–12, 32, 33]
  Strengths: interactivity; dynamic analysis enabled
  Limitations: hardware dependency; limited scalability; latency

Abstraction-based [13–15]
  Strengths: hardware independence
  Limitations: abstractions not always available

Learning-based [16–18, 35, 36]
  Strengths: hardware independence after training
  Limitations: hardware dependency during dataset recording

Symbex-based [18, 36, 37]
  Strengths: extensive path exploration; possibility to automate test case generation
  Limitations: potential path explosion; problematic complex behavior modeling
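The distance-guided exploration used by Jetset can be caricatured on a toy control-flow graph: precompute each block's shortest distance to the goal, then greedily take the closest-to-goal successor at every branch. This sketch is ours and omits the symbolic-constraint and Tabu-list machinery:

```python
from collections import deque

# Toy CFG: block name -> successor blocks. The self-loop on "poll"
# models firmware waiting on a peripheral; "main" is the goal.
cfg = {
    "entry": ["init", "error"],
    "init":  ["poll", "error"],
    "poll":  ["poll", "main"],
    "main":  [],
    "error": [],
}

def distances_to(goal, cfg):
    """BFS over reversed edges: shortest distance from each block to `goal`."""
    rev = {n: [] for n in cfg}
    for src, dsts in cfg.items():
        for d in dsts:
            rev[d].append(src)
    dist, queue = {goal: 0}, deque([goal])
    while queue:
        node = queue.popleft()
        for pred in rev[node]:
            if pred not in dist:
                dist[pred] = dist[node] + 1
                queue.append(pred)
    return dist

def guided_path(start, goal, cfg, max_steps=10):
    dist = distances_to(goal, cfg)
    path, node = [start], start
    while node != goal and len(path) <= max_steps:
        succs = [s for s in cfg[node] if s in dist]  # ignore dead ends
        node = min(succs, key=dist.get)              # closest-to-goal branch
        path.append(node)
    return path

assert guided_path("entry", "main", cfg) == ["entry", "init", "poll", "main"]
```

The greedy choice is exactly what makes the strategy fast and also what makes complex behaviors, where the right branch is not the locally closest one, hard to model.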
However, μEmu fails to emulate complex peripherals' behaviors (Table 1).

5 Our Approach

We propose an approach that enables dynamic security assessment techniques such as fuzzing on firmware, extending and partially redefining our proposal in [38]. Our proposal relies on binary rewriting to obtain a full-system emulation of firmware, ensuring hardware independence as well as interactivity and overcoming the limitations of current approaches. By providing innovative insights and practical solutions to the security concerns surrounding IoT devices, this chapter contributes directly to the advancement of knowledge and practices within the realm
of the device-edge-cloud continuum. Our research underscores the significance of addressing vulnerabilities in IoT devices and highlights the potential for improved paradigms, architectures, and applications in ensuring the integrity and resilience of this interconnected ecosystem. Besides the drawbacks already discussed in Sect. 3, most current approaches use binary instrumentation to intercept invocations of functions related to I/O interactions with peripherals and forward them to their replacement models. Since the instrumented code is executed in a virtual environment outside the firmware execution environment, its use significantly slows down the process. In our proposal, we completely bypass that step by fully replacing the interactions between the firmware and the hardware with code. We integrate the embedded peripherals' behavior at a high level through firmware rewriting, avoiding the involvement of lower-level abstractions such as relying on OS assumptions or using HALs. In this way, we enable the application of vulnerability assessment techniques based on a large number of executions, and possibly crashes, of the binaries under analysis. The process involves the following steps: (i) Portions of the binary code that constitute interactions with the peripherals must be identified. This step can be accomplished through various methods such as manual reverse engineering, debugging firmware in a hardware-in-the-loop environment, or locating HAL functions via library matching. (ii) Once these functions are identified, their behavior must be rewritten to ensure successful emulation of the firmware, a challenging task that relies on manual development because it is difficult to automate.
However, the literature suggests that several approaches can be used to automatically develop models to replace the original functions by recording firmware interactions with hardware peripherals, as discussed in Sect. 4. (iii) To perform the actual replacement of hardware-interacting functions with ad hoc implementations, we rely on binary rewriting techniques. This is the most significant aspect of our proposal, as it avoids intercepting calls through binary instrumentation, thereby significantly speeding up the emulation process. The binary rewriting step can be performed in several ways, some of which have already been introduced in Sect. 4. The most significant benefit of rewriting, compared to current approaches, is that the firmware can be treated as normal software from an emulation perspective. As a result, the entire re-hosting process is much faster, and vulnerability assessment techniques such as fuzzing can easily be adopted without incurring excessive slowdown due to binary instrumentation. During the normal operation of a device, numerous interactions occur with the hardware; identifying these interactions through binary instrumentation significantly slows down the peripheral emulation process. For example, consider an industrial control system (ICS) that leverages REST APIs to expose the status of a humidity sensor. In particular, when a request for humidity data is served through such APIs, the ICS calls a library function supplied by the manufacturer, which in turn invokes another function meant to interact with the peripherals, namely an HAL function provided by the microcontroller vendor,
able to read the humidity value from the chip by means of a serial communication. The REST service relies on the HTTP protocol, which is implemented on top of a TCP communication that, in our example, is provided by a library using an HAL to communicate with the Ethernet port. Even in such a simple scenario, we can identify several interactions between firmware and hardware, e.g., function calls to the microcontroller HAL and communication with the Ethernet port. Our idea consists in replacing these functions with custom implementations, in order to dissolve the firmware-hardware bonds and achieve complete independence from the underlying hardware, as illustrated in Fig. 1.

Fig. 1 Example: ICS rewritten firmware (application, middleware, and HAL layers of the ICS firmware, with the network and serial HAL functions replaced by rewritten functions)

By rewriting the functions related to I/O interaction, we achieve firmware emulation without the need to instrument the emulation.
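The layered call chain of this example, and the effect of the rewriting, can be sketched as follows (all function names and values are invented; only the lowest, hardware-facing function is replaced, while the library and REST layers run unmodified):

```python
# Sketch of the rewritten ICS firmware from Fig. 1: the REST layer and
# the manufacturer's library are untouched; only the HAL function that
# would touch the serial device is replaced. Names and values invented.

def hal_serial_read() -> bytes:
    """Rewritten HAL function: returns synthetic data instead of
    performing a serial transfer with the humidity chip."""
    return b"47"

def humidity_lib_get() -> int:
    """Manufacturer-supplied library: unchanged, still calls the HAL."""
    return int(hal_serial_read())

def rest_get_humidity() -> dict:
    """Application-level REST handler: also unchanged."""
    return {"humidity": humidity_lib_get(), "unit": "%"}

# After rewriting, the firmware runs as ordinary software, with no
# run-time instrumentation intercepting the calls.
assert rest_get_humidity() == {"humidity": 47, "unit": "%"}
```

Because the replacement happens once, in the binary itself, every later execution pays no interception cost, which is what makes high-throughput fuzzing practical.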
Firmware Dynamic Analysis Through Rewriting

6 Conclusions and Future Work

We inspected the state of the art of firmware re-hosting for vulnerability assessment purposes, and we extensively analyzed the strengths and weaknesses of existing solutions. In particular, we analyzed how dynamic analysis techniques that require a high number of executions to be performed quickly, as occurs in fuzzing, turn out to be impractical. Our research offers an alternative solution that utilizes binary rewriting to address the firmware re-hosting challenges, thereby enabling firmware to be tested using dynamic analysis techniques more efficiently. The chapter aims to initiate a dialogue in the field of rapid firmware fuzzing and posits that the proposed methodology can enhance analysts' existing vulnerability assessment methodologies. Currently, we are in the process of implementing the proposed approach and identifying the optimal techniques to employ at each stage of the rewriting process. Our preliminary findings are promising and show the effectiveness of our proposal.

References

1. G. Fortino, A. Guzzo, M. Ianni, F. Leotta, M. Mecella, Exploiting marked temporal point processes for predicting activities of daily living, in 2020 IEEE International Conference on Human-Machine Systems (ICHMS) (IEEE, 2020), pp. 1–6
2. G. Fortino, A. Guzzo, M. Ianni, F. Leotta, M. Mecella, Predicting activities of daily living via temporal point processes: approaches and experimental results. Comput. Electr. Eng. 96, 107567 (2021)
3. G. Fortino, A. Guerrieri, P. Pace, C. Savaglio, G. Spezzano, IoT platforms and security: an analysis of the leading industrial/commercial solutions. Sensors 22(6), 2196 (2022)
4. Y. He, Z. Zou, K. Sun, Z. Liu, K. Xu, Q. Wang, C. Shen, Z. Wang, Q. Li, {RapidPatch}: firmware hotpatching for {Real-Time} embedded devices, in 31st USENIX Security Symposium (USENIX Security 22) (2022), pp. 2225–2242
5. A. Guzzo, M. Ianni, A. Pugliese, D.
Saccà, Modeling and efficiently detecting security-critical sequences of actions. Futur. Gener. Comput. Syst. 113, 196–206 (2020)
6. M. Salehi, L. Degani, M. Roveri, D. Hughes, B. Crispo, Discovery and identification of memory corruption vulnerabilities on bare-metal embedded devices. IEEE Trans. Dependable Secure Comput. 20(2), 1124–1138 (2022)
7. American Fuzzy Lop (AFL). [Online]. Available: https://lcamtuf.coredump.cx/afl/
8. Y. Shoshitaishvili, R. Wang, C. Salls, N. Stephens, M. Polino, A. Dutcher, J. Grosen, S. Feng, C. Hauser, C. Kruegel, G. Vigna, SoK: (state of) the art of war: offensive techniques in binary analysis, in IEEE Symposium on Security and Privacy (2016)
9. J. Zaddach, L. Bruno, A. Francillon, D. Balzarotti et al., Avatar: a framework to support dynamic security analysis of embedded systems' firmwares, in NDSS, vol. 14 (2014), pp. 1–16
10. M. Kammerstetter, C. Platzer, W. Kastner, PROSPECT: peripheral proxying supported embedded code testing, in Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security (2014), pp. 329–340
11. K. Koscher, T. Kohno, D. Molnar, {SURROGATES}: enabling {Near-Real-Time} dynamic analyses of embedded systems, in 9th USENIX Workshop on Offensive Technologies (WOOT 15) (2015)
12. S.M.S. Talebi, H. Tavakoli, H. Zhang, Z. Zhang, A.A. Sani, Z. Qian, Charm: facilitating dynamic analysis of device drivers of mobile systems, in 27th USENIX Security Symposium (USENIX Security 18) (2018), pp. 291–307
13. D.D. Chen, M. Woo, D. Brumley, M. Egele, Towards automated dynamic analysis for Linux-based embedded firmware, in NDSS, vol. 1 (2016), pp. 1–1
14. A. Costin, A. Zarras, A. Francillon, Automated dynamic firmware analysis at scale: a case study on embedded web interfaces, in Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security (2016), pp. 437–448
15. A.A. Clements, E. Gustafson, T. Scharnowski, P. Grosen, D. Fritz, C. Kruegel, G. Vigna, S. Bagchi, M. Payer, {HALucinator}: firmware re-hosting through abstraction layer emulation, in 29th USENIX Security Symposium (USENIX Security 20) (2020), pp. 1201–1218
16. E. Gustafson, M. Muench, C. Spensky, N. Redini, A. Machiry, Y. Fratantonio, D. Balzarotti, A. Francillon, Y.R. Choe, C. Kruegel et al., Toward the analysis of embedded firmware through automated re-hosting, in 22nd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2019) (2019), pp. 135–150
17. B. Feng, A. Mera, L. Lu, {P2IM}: scalable and hardware-independent firmware testing via automatic peripheral interface modeling, in 29th USENIX Security Symposium (USENIX Security 20) (2020), pp. 1237–1254
18. C. Cao, L. Guan, J. Ming, P. Liu, Device-agnostic firmware execution is possible: a concolic execution approach for peripheral emulation, in Annual Computer Security Applications Conference (2020), pp. 746–759
19. E. Bauman, Z. Lin, K.W. Hamlen et al., Superset disassembly: statically rewriting x86 binaries without heuristics, in NDSS (2018)
20. J.R. Larus, T. Ball, Rewriting executable files to measure program behavior. Softw.: Pract. Experience 24(2), 197–218 (1994)
21. G. Ravipati, A.R. Bernat, N. Rosenblum, B.P. Miller, J.K.
Hollingsworth, Toward the deconstruction of dyninst. University of Wisconsin, Technical report, vol. 32 (2007)
22. D.W. Wall, Systems for late code modification, in Code Generation – Concepts, Tools, Techniques: Proceedings of the International Workshop on Code Generation (Springer, London, 1992), pp. 275–293
23. L. Van Put, D. Chanet, B. De Bus, B. De Sutter, K. De Bosschere, Diablo: a reliable, retargetable and extensible link-time rewriting framework, in Proceedings of the Fifth IEEE International Symposium on Signal Processing and Information Technology, 2005 (IEEE, 2005), pp. 7–12
24. K. Scott, J. Davidson, Strata: a software dynamic translation infrastructure, in IEEE Workshop on Binary Translation (2001)
25. C. Cifuentes, B. Lewis, D. Ung, Walkabout – a retargetable dynamic binary translation framework, in Workshop on Binary Translation (2002), pp. 22–25
26. J.K. Hollingsworth, B.P. Miller, J. Cargille, Dynamic program instrumentation for scalable performance tools, in Proceedings of IEEE Scalable High Performance Computing Conference (IEEE, 1994), pp. 841–850
27. B. Buck, J.K. Hollingsworth, An API for runtime code patching. Int. J. High Perform. Comput. Appl. 14(4), 317–329 (2000)
28. C.-K. Luk, R. Cohn, R. Muth, H. Patil, A. Klauser, G. Lowney, S. Wallace, V.J. Reddi, K. Hazelwood, Pin: building customized program analysis tools with dynamic instrumentation. ACM SIGPLAN Not. 40(6), 190–200 (2005)
29. M. Wenzl, G. Merzdovnik, J. Ullrich, E. Weippl, From hack to elaborate technique – a survey on binary rewriting. ACM Comput. Surv. (CSUR) 52(3), 1–37 (2019)
30. F. Bellard, QEMU, a fast and portable dynamic translator, in USENIX Annual Technical Conference, FREENIX Track (2005), pp. 41–46
31. P.S. Magnusson, M. Christensson, J. Eskilson, D. Forsgren, G. Hallberg, J. Hogberg, F. Larsson, A. Moestedt, B. Werner, Simics: a full system simulation platform. Computer 35(2), 50–58 (2002)
32. M. Muench, D. Nisi, A. Francillon, D. Balzarotti, Avatar2: a multi-target orchestration platform, in Proceedings of the Workshop on Binary Analysis Research (colocated with the NDSS Symposium), vol. 18 (2018), pp. 1–11
33. N. Corteggiani, G. Camurati, A. Francillon, Inception: {System-Wide} security testing of {Real-World} embedded systems software, in 27th USENIX Security Symposium (USENIX Security 18) (2018), pp. 309–326
34. W. Li, L. Guan, J. Lin, J. Shi, F. Li, From library portability to para-rehosting: natively executing microcontroller software on commodity hardware (2021). arXiv preprint arXiv:2107.12867
35. C. Spensky, A. Machiry, N. Redini, C. Unger, G. Foster, E. Blasband, H. Okhravi, C. Kruegel, G. Vigna, Conware: automated modeling of hardware peripherals, in Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security (2021), pp. 95–109
36. E. Johnson, M. Bland, Y. Zhu, J. Mason, S. Checkoway, S. Savage, K. Levchenko, Jetset: targeted firmware rehosting for embedded systems, in 30th USENIX Security Symposium (USENIX Security 21) (2021), pp. 321–338
37. W. Zhou, L. Guan, P. Liu, Y. Zhang, Automatic firmware emulation through invalidity-guided knowledge inference, in USENIX Security Symposium (2021), pp. 2007–2024
38. G. Fortino, C. Greco, A. Guzzo, M. Ianni, Enabling faster security assessment of re-hosted firmware, in 2022 IEEE International Conference on Dependable, Autonomic and Secure Computing, International Conference on Pervasive Intelligence and Computing, International Conference on Cloud and Big Data Computing, International Conference on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech) (IEEE, 2022), pp. 1–6
Performance Analysis of a Blockchain for a Traceability System Based on the IoT Sensor Units Along the Agri-Food Supply Chain

Maria Teresa Gaudio, Sudip Chakraborty, and Stefano Curcio
Università della Calabria, Rende, Italy
e-mail: mariateresa.gaudio@unical.it; sudip.chakraborty@unical.it; stefano.curcio@unical.it

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024
C. Savaglio et al. (eds.), Device-Edge-Cloud Continuum, Internet of Things, https://doi.org/10.1007/978-3-031-42194-5_3

1 Introduction

The agri-food supply chain can be seen as a complex system of systems (SoS) [1], and a traceability system along the entire supply chain seems challenging to realize. Today, different technologies exist to ensure traceability [2–8], but at the same time, some critical points remain and influence the reliability of the entire system. In particular, each product follows a specific supply chain with its own requirements and constraints. Most of the critical points lie in the interactions between the different actors involved in the supply chain, where the risk can increase due to the lower automated layers of protection; thus, malicious actors [9] – people or otherwise – could intervene, causing fraud and damage to the final product and the entire supply chain. Inserting IoT sensor units at these critical points is attractive both for real-time monitoring and consequent checking of the process and, at the same time, for possible integration with blockchain technology. The latter perfectly responds to the four steps of a traceability system – identification, recording, data links, and reporting [10, 11] – with its fundamental principles: immutability and transparency, disintermediation and provenance, and trust and agreement [12]. Moreover, blockchain technology could represent a solution that can be implemented in all specific supply chains [13, 14]. This work builds on the multilayered solution for agri-food supply chain traceability proposed in [15]. In this paper, a more in-depth description of the blockchain setup in the Hyperledger Fabric environment is given, and the main results of transaction simulation are presented. Compared to existing
technologies, the proposed multilayer solution refers to the whole agri-food supply chain system, and not to a single stage of a specific supply chain.

2 Step-by-Step Hyperledger Blockchain Setting

To obtain a generic scheme applicable to all types of supply chains, the entire agri-food supply chain was considered with only three main actors involved: farmer, manufacturer, and distributor. The manufacturer covers the production and packaging phase only, and the waste management unit is neglected, under the hypothesis of zero waste in the supply chain. Using Hyperledger Fabric, these hypotheses are more readily accepted, because Hyperledger technology is the most scalable among existing blockchain solutions. Therefore, when necessary, adding other organizations will be possible and easy, provided the initial Hyperledger setup is performed correctly. Moreover, Hyperledger can be set up as public or private, both permissioned and permissionless; i.e., it provides very high modularity, capable of adapting to any need [16]. Unlike other blockchain platforms, Hyperledger is an open-source project hosted by the Linux Foundation. The main problem could be the high memory capacity that the machine in use must have in order to store the blocks of the chain, to which new information is added each time. For this reason, it was decided to install the blockchain solution on virtual machines (VMs), in order to preserve the integrity of the local machine and, at the same time, use a cloud solution to store all data and blocks. This choice also involves a cost, but that cost can be managed from time to time without wearing out the local machine. This section shows the step-by-step working procedure for the Hyperledger blockchain solution used.
2.1 General Architecture

The general architecture proposed for agri-food supply chain traceability consists of the management of information from farm to fork, i.e., from the agricultural phase to the end consumer. The tracking and reconstruction of the information flow pass through the interactions between the different actors involved in the entire supply chain. For this general application, the blockchain chosen is an open-permissioned Hyperledger Fabric blockchain. Starting from this case study, extra-virgin olive oil was considered, with four organizations:

– Organization 1 (Org1) involves the interactions between the manufacturing process and the agricultural phase, where the raw material comes from.
– Organization 2 (Org2) involves the interactions between the final product coming from the manufacturing process and the distribution phase.
– Organization 3 (Org3) refers to the interactions involved in product recognition, e.g., in a store or at a point of consumption.
– Organization 4 (Org4) is the orderer organization.

Each organization has two peers, while the orderer organization has three orderer peers. The ordering service is one of the main features of Hyperledger Fabric: it guarantees transaction ordering. In fact, Fabric relies on deterministic – not probabilistic – consensus algorithms; thus, any block validated by a peer is guaranteed to be final and correct. Moreover, separating the endorsement of chaincode execution (which happens at the peers) from ordering gives Fabric advantages in performance and scalability, eliminating bottlenecks which can occur when execution and ordering are performed by the same nodes. In brief, this project has three organizations which contain the information – regarding farming, manufacturing, and distribution – plus a single orderer and a single channel for this business network. The entities interact with the blockchain application by invoking chaincode in the Fabric network, updating the ledger world state, and writing transaction logs. The blockchain network was set up using four Google VMs. When creating the VMs, it is convenient to choose the geographical area with the lowest cost. Among all platforms, Google was chosen on the assumption that a company already has a Google account, so there is no need to open another account on a different platform. Moreover, Google already returns some performance evaluations inherent to the machine. As the operating system, Ubuntu 18.04 LTS was installed on each VM. All VMs have two vCPUs and 4 GB of memory.
In contrast, VM1 has 8 GB because it will store more data from the different operations: the network configuration, the creation of the crypto-material for each organization, the creation of the channel artifacts, and the creation of a Docker Swarm network. In this network, all organizations can interact in one channel (see Fig. 1). The channel was generically called "mychannel." Docker Compose was used to launch the corresponding Fabric containers; as a first step, the services run in each container are defined in a Docker Compose YAML file. The installation of Hyperledger Fabric was carried out according to the indications of the Hyperledger Fabric manual, version 2.3 [17], which already covers the installation of all prerequisite packages, notably Docker for managing the containers. The first thing to do is create the artifacts folder for the channel configuration. A config folder holds the different configuration YAML files, which describe the flow of information to follow, i.e., the chaincode deployment to perform. For this, a Membership Service Provider (MSP) is necessary for each organization.
Fig. 1 Hyperledger Fabric architecture for a generic agri-food supply chain (Org1: interactions between manufacturing and farming; Org2: interactions between manufacturing and distribution; Org3: interactions involved in final consumer recognition; Org4: orderer organization; all connected through a channel over a Docker Swarm network)

First of all, the crypto-materials are created for each organization. Through a Docker Compose YAML file and a shell script (.sh) file – written in Bash and executable directly in the terminal – the certificate authorities (CAs), MSP, and Transport Layer Security (TLS) certificates are generated in a dedicated crypto-config folder for each organization. After the certificates' creation, the Genesis block and channel transaction files are created. Through another shell script file, named "create-artifacts," the Genesis block, Genesis channel configuration, and anchor peer transactions are generated. This file configures a single anchor peer for each organization. The anchor peer is critical in Fabric because it is a peer node that enables communication between peers of different organizations and discovers all active channel participants. At this point, it is essential to share the created material among the different organizations: by creating a dedicated repository on GitHub, by uploading each organization's certificates to the others, or by initially creating a single first VM and then cloning its disk to create the other VMs. The creation of a Docker Swarm network is necessary to orchestrate the instructions in each VM and, therefore, in each organization. After the Docker Swarm network creation, all containers have to run on each organization; this is done through another appropriate Docker Compose YAML file. In this Docker Compose file, each peer has a CouchDB database, where the information is collected.
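The hash-linked structure that makes a validated block final, starting from the Genesis block, can be shown with a minimal, framework-independent sketch; this is not Fabric's actual block format:

```python
import hashlib
import json

# Minimal hash-linked chain, illustrating why a validated block is final:
# each block commits to its predecessor's hash, so altering history
# invalidates every later link. This is NOT Fabric's real data format.

def make_block(prev_hash: str, transactions: list) -> dict:
    body = {"prev": prev_hash, "txs": transactions}
    body_bytes = json.dumps(body, sort_keys=True).encode()
    return {**body, "hash": hashlib.sha256(body_bytes).hexdigest()}

genesis = make_block("0" * 64, [{"channel": "mychannel", "config": True}])
block1 = make_block(genesis["hash"], [{"asset": "EVOO-001", "op": "create"}])

def verify(chain: list) -> bool:
    """Check that every block points at its predecessor's hash."""
    for prev, blk in zip(chain, chain[1:]):
        if blk["prev"] != prev["hash"]:
            return False
    return True

assert verify([genesis, block1])
```

Each new transaction extends this chain, which is why the storage requirement grows over time, as noted above for the VMs.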
2.2 Smart Contract Operations

In Hyperledger Fabric, the smart contract is called "chaincode," and for this case study a specific chaincode, called "Foodchain," was written in the Go language. Fabric also supports other programming languages, but it has been shown that Go is the most powerful for this technology [18].
Go is a good fit for developing fast and scalable blockchain systems. The Go language is not only simple to learn; it also combines some of the best features of JavaScript and Python, such as user-friendliness, scalability, stability, and speed – everything that makes it a strong choice for tailor-made blockchain applications. The Foodchain chaincode is reported and described in natural language, cf. Table 1.

Table 1 General Foodchain structure algorithm in natural language

Algorithm 1 – Foodchain
1: INITIALIZE the executable function through the main package
2: IMPORT all necessary Go packages to execute the different functions
3: DEFINE the FoodchainContract function to manage the asset, i.e., the information coming from the IoT sensor units
4: DEFINE user-defined type structures to store a collection of the product and the participants
5: INITIALIZE FoodchainContract
6: INVOKE the createProduct function to create the block and add information for a certain product
7: INVOKE the manufactureProcessing function to add information about the manufacturing process
8: INVOKE the distributorProcessing function to update information about the distribution state
9: INVOKE the query function to read the status of the product at a given time
10: IF the query function reads a consistent status of the product at a given time, the information and conditions input to the blockchain are correct; ELSE RETURN an error
11: INVOKE the setupFoodchainTracer function to record all information in the same block
12: RETURN all information contained in a block
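The flow of Algorithm 1 can be modeled on an in-memory ledger. The sketch below is illustrative Python (the real Foodchain chaincode is written in Go and runs against Fabric's world state), and all field names and values are invented:

```python
# Illustrative model of Algorithm 1's state transitions on an in-memory
# "ledger" (a dict). The real Foodchain chaincode is Go on Fabric; the
# field names and sensor values here are invented for the sketch.

ledger = {}

def create_product(pid, sensor_data):
    ledger[pid] = {"farm": sensor_data, "status": "CREATED"}

def manufacturer_processing(pid, sensor_data):
    ledger[pid].update(manufacturing=sensor_data, status="MANUFACTURED")

def distributor_processing(pid, sensor_data):
    ledger[pid].update(distribution=sensor_data, status="DISTRIBUTED")

def query(pid):
    """Read the product status; an unknown product signals an
    inconsistent state, mirroring the ELSE RETURN error branch."""
    product = ledger.get(pid)
    if product is None:
        raise KeyError(f"unknown product {pid}")
    return product

create_product("EVOO-001", {"temp_C": 18})
manufacturer_processing("EVOO-001", {"press_bar": 2.1})
distributor_processing("EVOO-001", {"truck_temp_C": 15})
assert query("EVOO-001")["status"] == "DISTRIBUTED"
```

Each function corresponds to one INVOKE step of Table 1; in the real deployment, each call becomes an endorsed and ordered Fabric transaction rather than a local dictionary update.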
The general algorithm of Foodchain also contains other small functions, which can be written in the Foodchain chaincode itself or in the subsequent shell script files that drive Hyperledger Fabric – that is, the scripts for initializing, invoking, and committing the chaincode on the different organizations involved, steps needed to execute the transaction of the asset information. In these shell script files, when a function is invoked on the peers, it is essential to replace localhost with the IP address of the VM corresponding to the orderer organization; this replacement enables the ordering service to work across the other organizations. In each of these shell script files, before each function, the environment variables must be set and the paths declared for each peer of each corresponding organization.
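As a rough bash sketch of that per-peer setup (the IP address, channel name, chaincode name, and domain names are hypothetical; the `CORE_PEER_*` names are the standard Fabric peer environment variables), the command is built so that the ordering service is reached on the orderer VM rather than on localhost:

```shell
# Hypothetical orderer VM address, replacing localhost.
ORDERER_IP="192.168.10.11"

# Environment for Org1's peer (paths are illustrative).
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_ADDRESS="peer0.org1.example.com:7051"
export CORE_PEER_MSPCONFIGPATH="${PWD}/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp"

# Build the invoke command against the orderer VM's IP, not localhost.
INVOKE_CMD="peer chaincode invoke -o ${ORDERER_IP}:7050 -C foodchannel -n foodchain -c '{\"Args\":[\"createProduct\",\"PROD1\"]}'"
echo "${INVOKE_CMD}"
```

The same environment-variable block is repeated, with its own MSP ID, address, and paths, before each function call for every peer of every organization.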
  • 50. Random documents with unrelated content Scribd suggests to you:
  • 51. CAPITULO III. Situacion de la España—Reinado de Don Fernando y Doña Isabel—Anarquía—Guerra cívil—Fanatismo—Restablecimiento de la Inquisicion—Influencia del Clero—Expulsion de los judios y moros—Odios entre España y Portugal. Nos es necesario echar una ojeada sobre el País á que se dirigía Colon y sobre los sucesos de la época en que debía llegar. El reinado de Enrique IV, llamado el impotente, había sido funesto para Castilla; él mismo había abierto las puertas de la mas escandalosa anarquía rebelándose contra su padre. No eran mejores los ejemplos de su vida privada; había agotado las fuerzas de su juventud en la mas desenfrenada crápula. Sin mas sucesion que su hija Juana y aun su legitimidad desconocida al extremo de llamarla el pueblo y los nobles la Beltraneja, á causa de las intimidades ostensibles de Don Beltran de la Cueva con la Reyna, fué este desgraciado vástago en vez de solucion de las cuestiones de sucesion, causa de trastornos y de guerras. De ánimo débil, pasó por sucesivas humillaciones que despretijiaron su autoridad y hacían que tomase colosales proporciones la anarquía. Hizo primero reconocer á su hermano Don Alfonso como sucesor al trono, cediendo á las imposiciones de la nobleza y desconociendo los derechos de su hija. Muerto Don Alfonso á los quince años de edad, se hizo por las mismas imposiciones, el pacto llamado de los Toros de Guisando, en que fué reconocida su hermana Doña Isabel con derecho á la sucesion del trono pretendiendo salvar su autoridad, con una claúsula por la cual esta no se casaría sin asentimiento del monarca. Todos estos resultados venian precedidos de intrigas, asonadas y crímenes.
  • 52. Llegó el descontento al extremo de quererse destronar al monarca para levantar á Doña Isabel, como ya una faccion había proclamado á Don Alfonso, pero la futura soberana de España tuvo la discrecion de no prestarse al movimiento. El matrimonio de la simpática princesa con su primo el infante de Aragon, Don Fernando, Rey de Sicilia, es un idilio que pocas veces ocurre en la crónica de los reinos. Don Enrique pretendió que la princesa se casase primero con el principe de Francia, despues con Pedro Giron, altivo y rebelde noble que puso esa condicion á su sometimiento y por último con el Rey de Portugal. La princesa resistió con energía todas estas imposiciones porque amaba á Don Fernando de Aragon y solo con él consentiría en un enlace. Para evitar las persecuciones é intrigas de la Corte hizose venir al Infante secretamente, corriendo serios peligros y con la proteccion de los nobles que le eran adictos en Castilla, celebraronse las nupcias que unian por lo pronto dos ardientes corazones y que mas tarde debian unir dos reinos, formando uno tan grande que en él jamas el sol tendria ocaso. Muerto Don Enrique IV en Diciembre de 1474 fué, en la ciudad de Segovia, proclamada Reyna de Castilla Doña Isabel, no sin que al mismo tiempo ambiciosos viniesen á disputarle el trono, so pretesto de sostener la causa de Doña Juana. La actividad que en esta lucha demostró la nueva Reyna, probó que ambicionaba ardientemente el poder y que tenia grandes aptitudes para sobrellevarlo. Doña Juana habíase esposado con Don Alfonso V Rey de Portugal y este invadió á Castilla, sostenido por los nobles adictos á ella y trabóse una guerra de sucesion que probó la impericia militar de unos y otros. Por último, vencido el Portugues, retiróse á su Corte y la infeliz Doña Juana, despues de haber sido heredera de un trono, novia de tantos ambiciosos y desposada de un Rey, concluyó por buscar la paz del alma en un Monasterio. 
Fallecido en Enero de 1479 el Rey de Aragon Don Juan II, fué elevado al trono Don Fernando y produjose así la unidad Española.
  • 53. En todo este movimiento vése por único actor á la casualidad. A Don Enrique sucederle debia su hija Juana y en defecto de ella, su hermano Don Alonso, jóven sensato, que apesar de su corta edad tuvo bastante carácter para rechazar mas de una infamia; hubiese sido un buen Rey y no llegó á ser sinó una esperanza frustrada sin que falten historiadores que atribuyan al veneno su prematura muerte. En tal caso Doña Isabel hubiese sido otra monja como Doña Juana ó hubiese optado por ser Reyna de Portugal, casandose con el viejo monarca que la pretendia. Entónces la union de los Reinos de Aragon y Castilla efectuado ipsofacto por su matrimonio con el Príncipe, no se hubiese realizado, sin que hubiesen tenido lugar muchos de los sucesos que vamos á referir. Prescott en la historia de los Reyes Católicos, dá al reinado de Doña Isabel un orígen electoral, cosa que en verdad no es asi, pues toda la autoridad de Doña Isabel se derivó del célebre pacto de los Toros de Guisando, infringido no obstante por la misma agraciada en la cláusula que exigía la intervencion de Don Enrique en su matrimonio. Si casualidad fué todo, pocas veces ha dado orígen á tanto bien y á tanto mal. A situacion tan espantosa, como la dejada por el reinado que caducaba, requeriase un gobierno enérgico y justo, que salvase el principio de autoridad, desconocido por la terrible anarquía que destrozaba la Península Ibera y los Reyes Católicos, que muchos y muy grandes errores debian cometer, eran no obstante justos y enérgicos. Todos los historiadores están contestes en el tétrico cuadro que ofrecía la España al morir Don Enrique. 
La seguridad de las personas y de las cosas era mayor entre las hordas salvajes que en sus campos y aun en sus ciudades; los mismos nobles mandaban desde sus castillos robar y asesinar á los viajeros; el feudalismo estaba en su apojeo; los tribunales por prevaricaciones escandalosas ó por miedo no servian sinó para alentar la injusticia y el crímen; la industria decaida, el comercio abatido; una crísis espantosa á causa de que cada noble acuñaba la moneda á su antojo, depreciándose
  • 54. esta al extremo de que las transaciones se hacian, como en los tiempos primitivos, por trueque ó cambio. El Clero era un poder, el único poder, la única autoridad, al extremo de que criminales vestian el hábito sin profesar para escudarse y quedar impunes. Los maestrazgos de las órdenes religioso-militares, recibian del Papa su autoridad; no se sometian al Gobierno y acumulaban grandes riquezas. En fin, si se quiere una imágen del cáos, busquese en esa época de la historia de España, sobre todo en Castilla y Andalucia. Los Reyes Católicos acometieron la tarea de domar esa anarquía y ya con rigor, ya con blandura; ya confirmando fueros y derechos á las ciudades, ya despojando á los nobles de sus derechos feudales, ya reconciliando los magnates enemistados, ya sometiendo á los que gobernaban por su cuenta incluso al altivo conde de Cádiz, ya prestigiando los tribunales de justicia, ya reformando los procedimientos y leyes civiles; en pocos años, la misma admiracion que nos ha causado el desquicio del gobierno de Don Enrique, nos asalta al ver las reformas obtenidas por los Reyes Católicos. Apesar de su energía, Doña Isabel nada hubiese conseguido sin la union del Reyno de Aragon; habiase allí refugiado lo mas sensato y patriota de la nacion Española; su constitucion liberal, su riqueza de que era emporio el puerto de Barcelona, todo eso reflejaba prestigio sobre ella y era un contrapeso poderoso; los nobles y el pueblo mismo de Castilla, sabían que en un caso dado, un ejército Aragonés vendría á apoyar á la Soberana y véase en esto una demostracion de como la anarquía, hija siempre de la desmembracion social, cesa cuando la unidad se restablece. Dos episodios citaremos para demostrar que estos Soberanos si bien dotados de grandes cualidades, no eran aptos para mejorar la situacion del Pais. 
Los obispados de España se proveian sin anuencia del Soberano, y si los Reyes Católicos reivindicaron ese derecho, no se descubre en ello sinó la influencia del Clero Español, interesado en esa reivindicacion porque era pospuesto por prelados de Roma. Los Reyes estaban
  • 55. sometidos á esa influencia al extremo de que el confesor de Doña Isabel, nuevamente nombrado, Fray Fernando de Talavera, cuando por primera vez fué á ejercer su ministerio, permaneció sentado para escuchar la confesion:—La costumbre es—dijo Doña Isabel—que ambos permanezcamos arrodillados.—Nó—exclamó el confesor—yo soy ministro de Dios y este su tribunal y V. A. debe permanecer de rodillas y yo sentado. La Reyna se arrodilló. Doña Isabel tenia, no hay duda grandes condiciones pero no era superior á su época, estaba muy á su nivel. La España debía permanecer siempre con los gérmenes de la anarquía, contenidos pero no extirpados; el fanatismo debia acrecentarse tanto mas cuanto mas quisiese hacerse de la religion elemento social. Es asi que el restablecimiento de la Inquisicion hizo á este poder mas irresistible que en las épocas anteriores. Algunos historiadores para disculpar á Doña Isabel dicen que fué á requisicion del Papa que se hizo este restablecimiento; no hay tal, existen aun los documentos que prueban que fué á peticion de la misma Doña Isabel que se dió la bula que debia levantar en Torquemada, el déspota, el tirano mas cruel de los tiempos pasados y futuros. Estos dos episodios prueban que, ó los Reyes Católicos no eran tales como los representa la historia, sinó crueles y sanguinarios ó que estaban tan dominados por el Clero como Don Enrique lo estaba por los nobles rebeldes. Destruido un feudalismo, levantaban otro cien veces peor; quitada á los nobles la horca y cuchillo, ponian en manos de los Inquisidores la tea para encender las hogueras del martirio. No faltan historiadores que fascinados por el prestigio de los grandes acontecimientos que la casualidad hizo producir en el reynado de Doña Isabel, quieran atenuar esta mancha, echando la culpa á la época. Nó, la moral y la justicia son eternas y no tenemos otra regla para juzgar los hechos de cualquier tiempo. 
No fueron menos graves otros errores cometidos por los Reyes Católicos; la expulsion de España de los Judios y de los Moros, las persecuciones inhumanas contra esos desgraciados, el saqueo de sus propiedades, son hechos
  • 56. que bastan para borrar la poca gloria que se les atribuye en la unidad de España y en el descubrimiento de América. La misma guerra contra los Moros refugiados en Granada, no se llevaba con tanto celo al principio; fué necesario que algunos nobles por si y ante si la iniciasen con la toma de Alhama, para decidir al Monarca á ponerse en campaña y en toda esa guerra cuesta discernir el fanatismo del amor patrio. Ni faltaron tampoco los estragos de la guerra cívil en este Reynado, bastando para comprobarlo que citemos el movimiento separatista que inició en Galicia el mariscal Pardo de Cela, siendo necesario que se enviase allí un ejército que sufrió un reves y que no pudo triunfar sinó á merced de una traicion por la cual, aprisionado el separatista, fué ahorcado sin piedad. Tal era la situacion en que Cristóbal Colon debia hallar á la España, agregando que los antiguos odios entre esa Nacion y Portugal habian recrudecido con la guerra de sucesion de Doña Juana, á causa de la invasion á Castilla por el Rey Don Alfonso, en proteccion de esas pretensiones.
  • 57. CAPITULO IV. Los Conventos—Llegada de Colon á el de la Rávila—Opinion de algunos autores—Colon en la Corte—Exámen de su proyecto— Su rechazo—Nuevas tentativas—Proyecto de marcha—Carta del Rey de Francia—Aceptacion de su proyecto en principio— Inconvenientes en la práctica—Aceptacion definitiva del proyecto. En aquellos tiempos de miseria y de barbarie, tropezábase frecuentemente en España y en Italia con altos muros entre los cuales se incrustaba iglesia gótica y en el interior de ese recinto hallabase almacenada la abundancia y refugiada la ilustracion, por lo general teológica, casuítica, fanática, pero á veces en una celda apartada, como un punto luminoso, se escondia bajo el hábito del fraile, un sabio ó un artista, único principio vital del porvenir, única chispa que algun dia restituyese al mundo los resplandores de la luz. Allí se absorbia el sudor de los labradores y de los artesanos distribuyéndose en cambio á los vagamundos, algunos bocados de sopa, ostentacion de caridad calculada para que se redoblasen las limosnas. A la puerta de uno de estos edificios del Monasterio de la Rávila, á corta distancia del puerto de Palos, un dia canicular en 1484 detúvose un peregrino que conducia un niño de la mano. Ni el polvo que cubria su pobre ropaje, ni la fatiga retratada en su semblante, ni el dolor que se reflejaba en sus ojos, disminuian la nobleza de su porte,—¿Que buscaba ese hombre?—¿Era acaso un mendigo?—No pedia sinó un poco de sombra para reposar y un mendrugo de pan para el niño. Habia en ese Convento una luz y con ella se descubrió lo que buscaba ese viajero en su afanosa peregrinacion; Fray Juan Perez de
  • 58. Marchena era uno de esos seres refugiados en el Convento, que vestia el hábito del fraile pero que conservaba el corazon y la inteligencia libres del fanatismo. Ver al forastero y adivinar en él todo un drama interesante, fué la concepcion feliz de un momento; sin duda pensó que tambien el Dante, algun tiempo hacia, habia buscado igual refugio en Italia. El peregrino y el fraile se miraron, se explicaron, se comprendieron. Ese humilde viajero que hallaba asi hospitalidad y apoyo, era Cristóbal Colon y el niño, su hijo Diego. Algunos historiadores modernos han querido desconocer este poético episodio, pretendiendo que Colon desembarcó en el puerto de Santa María y que fué hospedado en el Palacio del Duque de Medina-Celi, refiriéndose á un documento que no citan ni describen. Tal documento no puede ser otro que el que se refiere á las relaciones que tuvo con dicho Duque mucho despues de su llegada á España, como mas adelante lo veremos. Por otra parte no es verosímil que habiendo salido Colon de Lisboa furtivamente, despreciado por la Corte, sin influencia ni valimiento alguno, desembarcase en España con el prestigio necesario para hacerse abrir las puertas del Palacio del orgulloso Duque y encontrarlo dispuesto á servirlo. Todo en el reinado de Doña Isabel debia ser obra de la casualidad; Cristóbal Colon rechazado por el Monarca de Portugal por importuno, venia á España como vagabundo y como vagabundo llama á las puertas del monasterio de la Rávila donde halla un hombre que lo socorre y lo comprende, se encarga de la educacion del hijo, lo mune de recomendaciones y lo dirige á la Corte. Entre las recomendaciones que llevaba Colon habia una para aquel Fray Fernando de Talavera, confesor de la Reyna, de que hemos hablado ya y no podia ser mejor dirigido el pretendiente que á un hombre que hacia arrodillar á sus plantas á Isabel para oir su confesion y darle sus consejos.
  • 59. Hallábase la Corte en Córdoba y toda la atencion era absorvida por los cuidados de la guerra contra los Moros de Granada. El confesor de la Reyna apenas respondió con seca urbanidad á la recomendacion que se le hacia del marino; ignorante y tan fanático como de cortos alcances, no le sirvió como pudo haberle servido. Pero Colon estaba ya en camino y supo captarse la amistad de otras personas influyentes, entre ellas á Gheraldoni nuncio del Papa, y á su hermano Alejandro, preceptor de los hijos de los Monarcas y por intermedio de estos obtuvo una audiencia del Cardenal Mendoza que tanto valimiento tenia en la Corte que era llamado la tercer potencia. Mendoza debia ser hombre instruido, al menos de elevado espíritu, pues escuchó á Colon con atencion, lo exortó á perseverar en sus planes y obtuvo éste por su intermedio una audiencia de los Reyes. Colon era elocuente; conocia que para convencer y persuadir es menester hacer vibrar las fibras mas sensibles del corazon de su auditorio y halagar sus creencias y aun sus preocupaciones. Así pues, á los soberanos de Castilla les habló de la gloria de extender sus dominios; excitóles la avaricia con el acrecentamiento de un comercio riquísimo; pero en lo que insistió mas y con acento profético, fué en el triunfo de la fé cristiana, en la conversion de millares de idólatras y aun en el rescate del Santo Sepulcro. Es probable que Colon creyese en mucho de lo que decia, pero no hay duda que exageraba su fé y su ortodoxismo para persuadir. Su larga permanencia en Portugal le habia hecho adquirir una pronunciacion y un acento mas semejante al castellano y su trato con españoles, aun ántes de llegar á España, le permitia expresarse en ese idioma con bastante claridad y elegancia. La impresion causada en el ánimo de los Reyes fué favorable, sobre todo en Doña Isabel que era mas ambiciosa y mas accesible al entusiasmo. 
Pero el proyecto de Colon rozaba con puntos de la fé y dado el fanatismo de los Reyes, no podia ser aceptado sin someterlo al exámen de peritos.—Pero—¿Que peritos podrian ser en esta materia teólogos y frailes? Compuesto este tribunal de esta manera y
  • 60. presidido por el confesor de la Reyna fácil es comprender que el proyecto de Colon era de antemano condenado. Admitido á exponer y defender su idea ante el areópago ortodóxo presentósele otra ocasion de lucir su elocuencia. Esta vez expuso todas las teorías de Tolomeo y Toscanelli, para demostrar la practicabilidad del viaje y no poco le sirvió su erudicion en la Biblia para ayudarse á conciliar sus errores con los nuevos errores que profesaba. Había esta diferencia grandísima entre unos y otros errores; que los teológicos cerraban la puerta á todo descubrimiento; inmovilizaban, aletargaban, envenenaban la vida como las emanaciones de un lago sin corriente, miéntras que los errores de la ciencia impulsaban al progreso, admitian nuevas hipótesis, se encadenaban con las verdades del porvenir. Era una lucha titánica y sosteniéndola Colon era ya tan grande y tan digno de la posteridad, como si hubiese realizado ya su descubrimiento. Pasaban los meses y los años y el Consejo no expedía su dictámen. Entre tanto Colon abria su alma á dulces sentimientos y consuelos. Había trabado relacion con una noble y hermosa dama llamada Beatriz como aquella que inspiró al Dante y fruto de estos amores fué Don Fernando, que mas tarde hizose estimar por sus méritos y fué el primer historiador de las hazañas de su padre. Al fin en 1491, redoblando Colon sus instancias, obtuvo que el Consejo se expidiese, pero éste fallo le fué completamente adverso. Al recibir esta noticia, experimentó tanta amargura que, á no ser los vínculos que lo unian ya á España, la hubiera abandonado como abandonó á Lisboa. Tentativas infructuosas con algunos grandes personajes, entre ellos el Duque de Medina-Celi, lo detuvieron todavía, pero al recibir una carta del Rey de Francia que lo llamaba, resolvió partirse. 
Como recordará el lector, su hermano Bartolomé gestionaba en Inglaterra la admision de sus proyectos y regresando con éxito ó sin éxito, había instruido de ellos tambien al Monarca Francés que los aceptó con entusiasmo.
  • 61. Partióse pues Colon desandando aquel camino de Córdoba á la Rávila que había ántes emprendido tan lleno de esperanzas. Aquellos para quienes la vida no ha sido una contínua lucha, que no saben lo que es una esperanza salvadora que se desvanece, que no han contado con un recurso único que se pierde, aquellos que no han ido á la ilusion y vuelto al descanto por el mismo trayecto, no podrán hacerse una idea de los tristes pensamientos que asaltarían la mente de Colon. Por segunda vez llamó á las puertas del convento de la Rávila y por segunda vez Fray Juan Perez reanimó las esperanzas del marino. Consiguió que detuviese su viaje á Francia, envió á pedir una audiencia á la Reyna, de quien habia sido confesor, y una vez obtenida, marchóse á la Corte sin detenerse y aun sin esperar el dia para ponerse en marcha. Como en todos estos sucesos había algo de providencial, la carta del Monarca Francés, vino oportunamente y fué sin duda el gran argumento que empleó el de la Rávila para convencer á la Reyna. El Portugal era odiado por los Reyes y Pueblo Español, pero la Francia era mirada con recelo y emulacion, sin duda desde las guerras de Aragon y de Italia en que Franceses y Españoles se disputaban el mas rico giron de aquellos paises. Así fué que pensar en que la Francia acogería á Colon y podría gozar la gloria de su empresa, despertó los celos de Doña Isabel. Se ordenó que Colon regresase dándosele seguridad de que sería atendido y adelantándosele veinte mil maravedies para sus gastos. Llegó esta vez á la Corte nuestro héroe lujosamente vestido y con aire de triunfo y hallándose los Reyes entónces frente á los muros de Granada, allí se dirigió, llegando en el oportuno momento de ser tomada la ciudad y estarse celebrando alegremente la victoria decisiva contra los Sarracenos. Allí tuvo la satisfaccion de ver al fin de tantas peripecias aceptado, al menos en principio, la proposicion de su descubrimiento.
  • 62. Delegó la Reyna en varias personas el encargo de tratar las bases y formalizar el compromiso y otra vez Fray Fernando Talavera debia presidir el Consejo. Había éste ascendido á arzobispo de la recien reconquistada Granada, redoblado su influencia pero tambien su terquedad y su fanatismo. Entre Talavera y Colon existia una antipatia bien manifiesta y cuando oyó aquél que éste exigia ser nombrado Almirante y Virrey de las tierras que descubriese, asi como la décima parte de los productos, no pudo contenerse y exclamó: que no era mal arreglo el asegurar dignidades y riquezas sin exponerse á pérdidas. A esto contestó Colon que se comprometia á cargar con la octava parte del costo de la expedicion, obteniendo la octava parte de los beneficios. La Reyna que en este negocio era siempre de la opinion de su confesor, no se opuso al dictámen otra vez adverso á Colon, y este, ya en el año de 1492, partióse de la nueva ciudad de Santa-Fé para dirigirse á Francia como ya lo habia ántes pensado. Tenía proposiciones ventajosas del Rey de Francia y por esta razon no cedia de sus pretensiones; esto estaba previsto por él, como lo hemos dicho ántes, esto es: si sus ofertas eran acogidas por dos soberanos, aceptaría la mejor proposicion. No hay duda que prefería servir á la España porqué en ella tenía ya vínculos y afecciones, pero no eran tan poderosas que le impidiesen ir á buscar mejores condiciones. En cuanto á la Reyna había confiado á su Consejo la negociacion y sus consejeros le hacían creer que Colon cedería al fin y aceptaría ir al descubrimiento sin pedir honores y cuotas de ganancias. Pero viendo la Reyna que se marchaba en verdad, envió á detenerlo por segunda vez porque no quería de manera alguna, que fuese la Francia la que tuviese la gloria de una empresa que aunque no la reputase tan colosal como resultó, creia sin embargo fuese de gran importancia. 
Así pues todo lo relativo á nobles trasportes de parte de Isabel y á la resolucion de vender sus alhajas si faltasen fondos para la expedicion, no es sinó fábula inventada para engrandecer á la Reyna, y hacer mas decoroso este período de la historia.
  • 63. Los fondos de la expedicion se sacaron del tesoro público de Aragon y del particular de Don Fernando. Aceptado en definitiva lo que exigia Colon, firmóse el convenio en la ciudad de Santa-Fé, en la Vega de Granada en 17 de Abril de 1492. Si no fué la Francia la iniciadora del descubrimiento de América es debido á dos nobles sentimientos que detuvieron á Colon, el amor á Doña Beatriz y la amistad de Fray Juan Perez de Marchena, sin lo cual no hubiera regresado á Córdoba á reanudar sus negociaciones. Sin que desconozcamos la grandeza del Pueblo Español, no hay duda que la Francia pudo llevar en el descubrimiento y poblacion de la América, elementos sociales mas constitutivos que los que llevó aquel Pueblo que se hallaba en esa época, en condiciones nada aparentes para la colonizacion y en el cual era constitucional la anarquía y arraigado estaba el fanatismo. Tampoco hubiéranse reproducido en las nuevas colonias de la América del Sur el odio entre Portugueses y Castellanos y las cuestiones de límites y de predominio, hubiéranse resuelto con otro espíritu, y otras consideraciones.
  • 64. CAPITULO V. Aprestos para la marcha—¡Á que poco costo adquiría la España un mundo!—Partida de la expedicion—Derrotero— Descubrimiento—Asombrosos errores—Desviacion de la brújula —Verdadero descubrimiento de Colon. Señalóse el puerto de Palos para armarse y partir la expedicion que debía lanzarse al Océano á realizar los ensueños de Colon. Dictáronse todas las providencias tendentes á facilitar la partida, y aprovechándose la obligacion en que estaban los habitantes de ese puerto de facilitar como tributo embarcaciones y gentes de mar al Estado, ordenóse el secuestro de dos embarcaciones y su correspondiente tripulacion. Los gastos de la Corona pues, debian ser bien insignificantes, reduciéndose á la compra de víveres y pago de cuatro meses adelantados á los tripulantes. ¡Á tan poco costo iba la España á adquirir un Nuevo Mundo! El armamento del tercer buque corria por cuenta de Colon y segun afirman casi todos los historiadores, sin que sepamos la fuente de donde han sacado esto, Martin Alonso Pinzon, rico armador del mismo puerto de Palos, facilitó los fondos necesarios para tal objeto, resolviéndose él y su hermano á acompañarle en el viaje, tomando el mando de los buques que debian seguir al Almirante, nombre con el cual se designó desde entónces á Colon. De los tres buques aprestados, solo el que montaba este: la Santa Maria tenia cubierta; los otros dos: la Pinta, mandada por Martin Alonso Pinzon y la Niña por Vicente Yanez Pinzon eran carabelas, no ascendiendo todo el personal de la escuadrilla sino á ciento veinte hombres, reclutados por cierto, con indecible trabajo. El viérnes 3 de Agosto de 1492, antes de la salida del Sol, zarparon los buques que debian navegar al rumbo que Colon indicase, con la
  • 65. condicion de no tocar en las islas Azores, de Cabo Verde, costa de Guinea ó cualquier otra colonia portuguesa. Desde el primer dia de la navegacion el Almirante abrió un diario para llevar cuenta de las ocurrencias de ella, de modo que esta parte de la historia tiene fuente segura. En la introduccion de ese diario hallamos de notable que llamase á los Reyes Católicos Reyes de España y de las islas del Mar.—¿De que islas queria hablar?—La Antilla segun la creencia de la época estaba poblada: Cipango y demas islas imaginadas eran dependencias de la India y era de suponer que ese gran Kan, emperador poderoso, no había de estar muy dispuesto á ceder sus dominios á un puñado de aventureros. Tal vez Colon adivinaba la existencia de algunas tierras inhabitadas ó las suponía tan solo para excitar la codicia de los reyes; pero si se recuerda el empeño con que exigió ser nombrado Gobernador de dichas tierras, es forzoso admitir la primera de esas hipótesis. Sin embargo poca importancia acordaba á dichas tierras pues decía que el objeto principal de su viaje era llevar una embajada á aquel poderoso monarca de la India y tratar de la conversion de los infieles. En corroboracion de lo dicho, veremos como, al llegar al término de su viaje buscaba mas á aquel Monarca que las tierras incógnitas. Dejando á un lado estas dudas sigamos la narracion de su viaje. Llegada la escuadra á las Canarias, reparadas las averías de uno de los buques, corregidos los defectos de la arboladura de otro, hecha abundante provision, zarpó de la Gomera el dia 6 de Septiembre con rumbo al Sud y no al Poniente como algunos dicen. Dejemos á un lado las minuciosidades de este viaje y fijemos nuestra atencion en su derrotero y escalas para convencernos que la conducta, las disposiciones y los conceptos de Colon se ajustaban á la carta geográfica que le trasmitió Toscanelli y al sistema de longitudes que este gran hombre había, bajo la fé de Marco Polo, monstruosamente alterado. 
De la Gomera navegó Colon casi derecho al Sud y acercándose al Trópico de Cancer, dobló de improviso al Occidente, es decir: al rumbo hácia el cual nadie había
  • 66. navegado y conservó la misma direccion hasta que no le indujo á cambiarla el indicio de una tierra cercana. Con esto Colon trataba de alcanzar el paralelo que le había designado Toscanelli. Allí creía hallar despues de dos meses mas ó ménos de navegacion como le decía aquel en la segunda de sus cartas, ó la tierra incógnita de Tolomeo ó algunos de aquellos lugares, en la parte de la India, donde podría refugiarse en algun contra-tiempo imprevisto y en verdad resultó que despues de treinta y siete dias de viaje solo le faltaban cincuenta y cinco grados para completar los ciento veinte grados determinados en aquella carta. La provision de víveres que hizo, segun dice Gonzalo de Oviedo, era suficiente solo para ese tiempo. El nombre de India que Colon dió á la América y la pretension que las islas eran del mar Indiano, fué consecuencia de la promesa que le hizo Toscanelli de conducirlo directamente al Asia, á los lugares fertilísimos de toda clase de especería y piedras preciosas; por cuanto todo el que navegase al Poniente siempre encontraría esos lugares al Poniente. Así tambien el nombre Cubanacan pronunciado por los habitantes de Cuba, le hicieron creer que se hallaba en los dominios del gran Kan y la palabra Cibao repetida por los de la Española le hicieron tambien creer que había llegado á Cipango. Había dado Colon órden de conservar siempre rumbo al Occidente y de navegar hasta setecientas leguas, deteteniéndose en esa distancia pues á tal altura debia hallar tierra. De Europa á la Antilla, como lo hemos dicho, resultaban del cálculo de Toscanelli, dos mil cuatro cientos setenta y cinco millas que hacen algo menos de las setecientas leguas expresadas, luego pues la tierra que creía Colon hallar en esas inmediaciones era la Antilla de Toscanelli. El viérnes 12 de Octubre de 1492 descubrióse por la tripulacion de la escuadra la tierra Americana. Era esta tierra la isla llamada por los naturales Guanahami y por Colon, San Salvador. 
Here the error of Toscanelli, the temerity of Colon, and the danger his fleet ran are presented to us in all their magnitude. Without the various islands of America, which put an end to his voyage at precisely the point where India had been promised him, his loss would have been certain. On the parallel he sailed he would have sighted no land until near China, and China, which Toscanelli placed one hundred and twenty degrees from Lisbon, in truth lay two hundred and thirty degrees away. Thus, even supposing that winds and sea had favored him over so long a passage, where could he have reprovisioned, and how could he have subsisted for more than two months with an absolute want of victuals? When one considers that Colon was mistaken by one hundred and ten degrees, so great a risk astonishes, as does the fact that such enormous errors were crowned with the happiest of successes.

It has been said in vain, in Toscanelli's defense, that he suspected the existence of an intermediate continent, or at least of a vast island between Europe and Asia. No trace of such a suspicion appears in his letters, and his single, absolute longitude of one hundred and twenty degrees in any case excludes that hypothesis. What certainly misled him was the apparent symmetry of his new system: having, on Polo's testimony, added some one hundred and ten degrees of longitude to the known part of the earth, he was necessarily led to subtract the same longitude from the unknown part of the Ocean.

On this voyage Colon had been very fortunate; the trade winds carried his vessels over a calm sea with delightful speed. But a phenomenon unknown until then was to appear and leave the Admiral perplexed. Since the variation of the compass was not known, and no North was believed in but the North of the World, no one suspected the magnetic attraction that would make itself felt on leaving the northern parallels; the phenomenon was therefore alarming and unexpected. The pilots of the expedition came to the Admiral in alarm, asking him to explain the cause of what they observed. He was as ignorant on the point as they, but so as not to dishearten them he gave them a sophistical explanation, as Galileo did the first time he was consulted about atmospheric pressure on a column of water.

Colon's merit does not lie in having discovered America, for neither he nor his contemporaries ever conceived of the existence of a new continent. The unknown lands were supposed to be appendages of the Asiatic continent, and nothing new was believed to be discovered. Even with American soil under his feet, he strove to reconcile it with the reports of Marco Polo. Colon's merit lies in having placed himself boldly at the service of science as it then stood, in having accepted from the learned a scientific theory, and in having set out to put it into practice without flinching before the need to cross unknown seas and to sail farther from land than anyone had ever sailed. More than America, Colon discovered the Ocean; he revealed the mystery of its crossing, and the thousand voyagers who followed him and discovered more land than he did have not so great a merit, because it was he who opened horizons believed impenetrable.
CHAPTER VI. Wandering through the archipelago of the Antilles – Loss of the flagship – Desertion of the Pinta – Return voyage – Call at Portugal – Pinzon's treachery – Coincidences favorable to Spain – Celebrated doctrines on the lands of infidels – Bull of demarcation – Triumphs of Portuguese diplomacy.

Colon found himself amid the newly discovered archipelago, full of admiration at so luxuriant a nature. The forests, the meadows, the rivers, the lakes, the infinite variety of birds, the mountain slopes, the gentle undulation of the plains: everything shone under the rays of a splendid sun, and the vegetation exhaled the most intoxicating perfume. Yet at the same time he was irresolute; he would land on one island and return to the ships to visit another, all the while naming them Isabella, Española, Concepcion, and so on, which proves that, though he did not abandon his belief that he was in the vicinity of Asia, he recognized that these islands were not those marked on Toscanelli's chart, nor on the one he himself had drawn as a guide for his voyage. While Colon thus wandered over the broad waters of the Antilles, two misfortunes befell him. One was the desertion of the Pinta, whose commander Pinzon wished to push the discoveries on his own account and gather the coveted riches. The other, and the more irreparable, was the loss of the Santa María, dragged by a current and driven violently onto a bank. All efforts to save her proved useless, and the little squadron was left without its best ship.
Colon's spirit did not falter at this. He used the time to gather information from the peaceful and noble inhabitants of the islands, for whom the era of slavery and martyrdom had now arrived. All agreed in pointing to the south, to the existence of a vast and powerful empire whose sovereign commanded millions of subjects and possessed immense riches. These Indians doubtless alluded to the Mexican empire, but Colon understood that such a sovereign must be the Great Khan, and the empire, the Orient. He found himself, however, in poor condition to pursue the discovery, reduced to a single caravel and surrounded by rebellious and ill-disposed men. He therefore resolved to return to Spain, leaving on Española, the island of the most friendly of the Indian caciques, named Guacanajari, a fort built from the wreckage of the Santa María and a garrison of thirty men. The fort was built near the inlet he named La Navidad, as he named the fort itself: the first essay in colonization, which was to bear such unhappy fruit, and proof from the outset that the people who were discovering and peopling America were the people least fitted to do so.

On the fourth of January of the year following the discovery, that is, of 1493, Colon set sail without waiting for the Pinta, which he believed lost; a strong wind drove him toward the promontory and inlet he named Monte-Cristi. Shortly after taking this refuge he sighted the Pinta, which was making for the same port. Pinzon defended his rebellion with puerile excuses, and by accepting them Colon committed the first weakness that was to prove so fatal to him and to the colonies. He thought that to punish the rebel would be to provoke his partisans and perhaps make the return to Spain impossible; but in this way his authority was broken, and the anarchic elements he relied on for his future expeditions were left disposed to mischief. Thus, despite the arrival of the Pinta, Colon persisted in his design of returning to Spain.

On the 9th of January the ships left their refuge and set sail, steering eastward. This return voyage was as stormy as the outward one had been calm. Colon believed he would perish and that the news of his discovery would perish with him; in anticipation of so sad an event he wrote a brief account of his voyage and, with due precautions, placed it in a cask which he abandoned to the waves, and had another copy placed in the sterncastle of his ship. The Pinta had again become separated and was again believed lost, no longer through her commander's rebellion but through the fury of the storm. At last, on the 15th of February, land was sighted. It was the island of Santa María, the southernmost of the Azores, but because of the storm the Niña could not anchor until the 17th. The Portuguese received Colon and his men badly, to the point of attempting to seize the ship and imprison them. Some attribute this hostility to the King of Portugal, who, believing that the Castilians' expedition encroached on his discoveries, had ordered the governors of his possessions to try to seize the voyagers; but the conduct the same monarch afterwards observed belies this supposition. On the 24th of February Colon resumed his course and, not without fresh storms and perils, managed on the 3rd of March to anchor at Rastello, near the mouth of the Tagus. From there he wrote to the Sovereigns of Spain announcing his arrival, and asked leave of the King of Portugal to proceed to Lisbon, his present anchorage being unsafe. Don Juan II was then at Valparaiso with his court, nine leagues from Lisbon, and although Colon's discoveries must have caused him keen vexation for not having profited by them himself, he bore himself with dignity and ordered that Colon be supplied with everything he might need.
The Portuguese chronicler Rui de Pina relates that there was no lack of counselors who urged the King to order Colon's death so as to seize his secret; the chronicler's testimony alone is not enough to make us believe it, but however that may be, the King treated the Admiral with marked consideration. One suspicion did occupy the monarch's mind: whether the discovery affected his African possessions. Colon, however, explained clearly that the lands he had visited lay beyond everything hitherto known, and on a course different from that of the Portuguese discoveries. After this the Admiral departed for Spain, reaching the port of Palos at midday on the 15th of March. The Niña had scarcely anchored when the Pinta appeared; she had been driven onto the coast of Cantabria, and from there Pinzon had written to the Sovereigns that Colon had been shipwrecked and that the discovery was owed to himself. He had thus committed two unjustifiable betrayals: his rebellion in the archipelago of the Antilles, and his imposture on reaching Spain. These faults, if not excusable, are at least attenuated by the aid he gave Colon at the outset of the enterprise, and by a remorse so deep that he died of grief.

The Catholic Sovereigns were then at Barcelona, where Don Fernando had survived an attempted assassination; the treaty of peace with France had just been signed, by which France ceded the counties of Roussillon and Cerdagne. This diplomatic triumph coincided with the definitive conquest of the Canaries, begun by Bethencourt and now concluded by Alfonso Fernandez de Lugo. Finally, the Marquis of Cádiz had died and, as he left no heir, the city and port remained definitively annexed to the Crown. To crown such a number of happy coincidences, Colon arrived with the news of his discovery, whose greatness was not yet even suspected.
From the port of Palos to Barcelona there is a considerable journey, passing through towns and cities; that journey was a triumphal march for Colon, who rode on horseback, preceded by samples of the products, the animals, and the Indians brought from the discovered lands. Every town came out to acclaim the fortunate voyager who not long before had presented himself as a beggar. Colon, then at the height of his glory, did not know how many bitternesses he was yet to suffer, nor how the brightness of his star was to be eclipsed. The Sovereigns received Colon affectionately, heard from his lips the account of his voyages with interest and even with enthusiasm, and added to his privileges further favors, among them that he might bear a coat of arms with the emblem of the discovery and the following inscription: "Para Castilla y Leon, Nuevo mundo halló Colon" ("For Castile and Leon, Colon found a New World").

At the same time the monarchs thought of securing the newly discovered countries for their dominion. The Law of Nations in that age was neither far advanced nor always consulted. Among the means of acquisition, the Roman tradition offered nothing but conquest; but the rise of the Popes and their spiritual jurisdiction over all Catholic kings brought in another and very singular doctrine: according to it, infidels had no right to hold dominions, and any Christian prince might dispossess any heretic prince of his lands and take his place. The ownership of the world belonged to Catholics, who might reclaim any land of infidels, and by virtue of this the Pope could distribute lands as arbiter. Thus Martin V and his successors granted to the Crown of Portugal all lands that her subjects might discover from Cape Bojador to the Indies, and the Catholic Sovereigns, by a treaty concluded with the Portuguese monarch in 1479, had bound themselves to respect those rights.

The throne of Saint Peter was then occupied by the dissolute Borgia under the name of Alexander VI. It was easy to convince him that the Castilians' discoveries lay on a course other than those secured to the Portuguese, and at length, in March of 1493, he issued a bull granting the Crown of Spain, for its discoveries, the same guarantees that had been granted to Portugal. To this bull was added the celebrated demarcation, by which there was adjudged to Spain omnes insulas et terras firmas, inventas et inveniendas, detectas et detegendas versus occidentem et meridiem. The demarcation was drawn by an imaginary line running from the Arctic pole to the Antarctic, one hundred leagues west of the Azores and the Cape Verde islands.

Meanwhile a second expedition to the discovered lands was being prepared; but the Portuguese, despite the celebrated demarcation, remained suspicious of it. There then began a contest of cunning and intrigue, in which bribery and the vilest devices were employed to discover the secrets of this business. The corrupting Portuguese diplomacy, which was to have worthy successors in America, emerged triumphant in this case with the celebrated Treaty of Tordesillas, concluded on the 7th of June 1494, by which the dividing line was moved, to be drawn three hundred and seventy leagues to the west. This concession, which was to prove fatal to Spanish America, remains unexplained. By this modification the Portuguese later alleged rights to occupy Brazil and to make themselves masters of one of the most important regions of America.