INDUSTRY WATCH
need, it cannot be ensured that a single
or even a group of associated containers
will always run on one particular server.
The container management solution
must therefore provide service discovery
functions, so that associated containers
can be found by other services,
regardless of whether they run
on-premise or in a public cloud.
5. Scaling applications
and infrastructure
When it comes to scaling – a process
to be supported by the container
management system – there are two
different types:
• Scaling of container instances with
the application itself: During peak
loads, an administrator must be able,
using the container management
solution, to manually launch additional
container instances to cover current
demand. For dynamic scaling, an
automatic mechanism driven by stored
metrics is appropriate: administrators
can specify that, if CPU load crosses a
threshold, storage capacity is exceeded,
or specific events occur, a predetermined
number of additional container
instances is started.
• Scaling of the container
infrastructure: It must also be possible
to expand the applications running
on the container platform to hundreds
of instances – for example, by extending
the container platform into a public cloud.
This is much more complex than
starting new containers on existing servers.
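The metric-driven scaling described above can be sketched as a simple proportional rule, along the lines of the one used by the Kubernetes Horizontal Pod Autoscaler (desired = ceil(current × observed metric ÷ target)). The function name, thresholds, and limits below are illustrative, not taken from any particular product:

```python
import math

def desired_replicas(current_replicas: int,
                     observed_cpu: float,
                     target_cpu: float = 0.70,
                     min_replicas: int = 1,
                     max_replicas: int = 10) -> int:
    """Proportional scaling rule: grow or shrink the number of
    container instances so observed utilisation approaches the target,
    clamped to configured minimum and maximum replica counts."""
    desired = math.ceil(current_replicas * observed_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

# Peak load: 4 replicas at 90% CPU against a 70% target -> scale out to 6.
print(desired_replicas(4, 0.90))  # 6
```

In practice the administrator only stores the target metric and the replica limits; the container management solution evaluates the rule continuously and starts or stops instances itself.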
6. Providing persistent storage
Introducing microservices in application
architectures also has an impact on the
provision of storage capacity. Storage
should itself be packaged and deployed
as a microservice running in a container,
becoming container native storage.
This means that the
management of persistent storage
(container native storage) for application
containers is also a task for the container
management solution. With the Red
Hat OpenShift Container Platform, for
example, infrastructure administrators
can provide application containers with
persistent container native storage that
is managed by the Kubernetes orchestration
framework. The Kubernetes Persistent
Volume (PV) framework supplies pools
of application containers running on
distributed servers with persistent
storage. Using Persistent Volume Claims
(PVCs), developers can request
PV resources without detailed
knowledge of the underlying
storage infrastructure.

Managed using a container management
solution, container native storage should
support the dynamic provision of different
storage types such as block, file, and
object storage, as well as multi-tier
storage via quality-of-service labels.
Furthermore, persistent storage improves
the user experience in the operation of
stateful and stateless applications, and
makes it easier for administrators to
manage the provision and use of storage
for applications. With container native
storage, IT departments benefit from a
software-defined, highly scalable
architecture that can be deployed in an
on-premise data centre as well as in
public clouds, and in many cases is more
cost-effective than traditional
hardware-based or pure cloud-based
storage solutions.

7. Open source solutions offer greater
potential in terms of innovation
Container technologies, in particular
Linux containers, have grown from a
niche product into a popular trend in
the space of just a few years. Linux
containers based on the Docker format
have played a key role here. The open
source-based Docker format is supported
by many leading IT companies, from
Amazon, Google and Hewlett Packard
Enterprise through to IBM, Microsoft,
and Red Hat. An industry standard has
therefore been created, one that is also
valued by enterprises in all sectors that
use Linux containers for development.
Precisely because so many users and
software producers use Linux containers,
a highly dynamic market has developed
that follows the principles of open source
software. Increasingly, enterprises are
adopting a microservices architecture
and delivering container-based
applications. This creates new
requirements that have to be
implemented as quickly as possible in
the form of new functionality – something
that would not be possible under a closed
source model with only a single software
vendor. The same applies to container
management solutions.

According to GitHub, around 1,000
developers from software vendors
and their customers are working on
the Kubernetes open source project,
which forms the basis for container
management in many solutions.
Innovation in new releases happens
more quickly than with proprietary
software: there, release cycles of 12 to
18 months are the norm, compared
with three months for Kubernetes. Open
source container management solutions
therefore have considerable advantages
over vendor-specific solutions in terms of
innovation and agility.

“SINCE CONTAINERS ARE INHERENTLY
DYNAMIC AND VOLATILE, A CONTAINER
MANAGEMENT SOLUTION IS OF
GREAT IMPORTANCE.”
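The PV/PVC request flow described under "Providing persistent storage" can be sketched as a minimal claim manifest. The claim name, capacity, and storage class below are illustrative assumptions, not values from the article:

```yaml
# A developer requests storage without knowing the backing infrastructure.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce         # single-node read/write access
  resources:
    requests:
      storage: 5Gi          # capacity requested from the PV pool
  storageClassName: fast    # hypothetical quality-of-service tier
```

The orchestration framework matches the claim to a suitable Persistent Volume (or provisions one dynamically via the named storage class) and mounts it into the application container.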
www.intelligentcio.com