What Is Docker?

Docker is an open-source platform that enables developers to build, deploy, run, update and manage containers.

Containers are standardized, executable components that combine application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

It's worth noting that when people talk about Docker, they are usually referring to Docker Engine, the runtime for building and running containers. The name also refers to Docker, Inc.1, the company that sells the commercial version of Docker, and to the Docker open source project2, to which Docker, Inc. and many other organizations and individuals contribute.

Containers simplify the development and delivery of distributed applications. They have become increasingly popular as organizations shift to cloud-native development and hybrid multicloud environments. Developers can create containers without Docker by working directly with capabilities built into Linux® and other operating systems, but Docker makes containerization faster and easier. As of this writing, Docker reported that over 20 million developers use the platform monthly3. Alongside related container technologies such as the Kubernetes orchestration platform, Docker plays a crucial role in modern software development, specifically microservices architecture.

In contrast to the traditional monolithic approach of a large, tightly coupled application, microservices provide a cloud-native architectural framework. This framework composes a single application from many smaller, loosely coupled and independently deployable components or services. Each service can be containerized using Docker, simplifying deployment and allowing teams to roll out new versions and scale applications as needed.

Containers are made possible by process isolation and virtualization capabilities built into the Linux kernel. These capabilities include control groups (cgroups) for allocating resources among processes and namespaces for restricting a process's access or visibility into other resources or areas of the system.

Containers enable multiple application components to share the resources of a single instance of the host operating system. This sharing is similar to how a hypervisor allows multiple virtual machines (VMs) to share a single hardware server's central processing unit (CPU), memory and other resources.
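For example, Docker Engine surfaces these kernel capabilities as flags on docker run: resource limits are translated into cgroup settings, and each container receives its own namespaces. A minimal sketch (the image and commands are arbitrary examples):

    # Cap the container at 256 MB of memory and 1.5 CPUs; Docker Engine
    # enforces these limits through cgroups on the host kernel.
    docker run --rm --memory=256m --cpus=1.5 alpine sleep 5

    # PID namespace isolation: 'ps' inside the container sees only the
    # container's own processes, not the host's.
    docker run --rm alpine ps aux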

Container technology offers all the functions and benefits of VMs, including application isolation and cost-effective scalability, plus significant other advantages: containers are lighter weight than VMs, start faster, use compute resources more efficiently and are more portable across environments.

Organizations that use containers report many other benefits as well, including improved app quality and faster response to market changes.

Why use Docker and why is it so popular?

Docker is the most widely used containerization tool, with an 82.84% market share.4 Docker is so popular today that "Docker" and "containers" are used interchangeably. However, the first container-related technologies were available for years, even decades5, before Docker was publicly released as open source in 2013.

Most notably, in 2008, Linux Containers (LXC) was implemented in the Linux kernel, fully enabling virtualization for a single instance of Linux. While LXC is still used today, newer Linux kernel technologies are available. Ubuntu, a modern, open source Linux operating system, also provides this capability. Docker lets developers access these native containerization capabilities by using simple commands, and automate them through a work-saving application programming interface (API).
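To illustrate how simple those commands are, a typical container lifecycle takes only a handful of CLI calls (the image, container name and ports below are arbitrary examples):

    docker pull nginx:alpine                           # fetch an image from a registry
    docker run -d --name web -p 8080:80 nginx:alpine   # start a container in the background
    docker ps                                          # list running containers
    docker stop web                                    # stop the container
    docker rm web                                      # remove it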

There are many advantages to using Docker over the older, less compatible LXC.

Today, Docker containerization also works with Microsoft Windows Server and Apple macOS, so developers can run Docker containers on any of these operating systems. All of the leading cloud service providers (CSPs), including Amazon Web Services (AWS), Microsoft Azure, Google Cloud and IBM Cloud®, offer specific services to help developers build, deploy and run applications containerized with Docker.

Docker architecture, terms and tools

Docker uses a client/server architecture. The following is a breakdown of the core components associated with Docker, along with other Docker terms and tools.

Docker host: A Docker host is a physical or virtual machine running Linux (or another Docker Engine-compatible OS).

Docker Engine: Docker Engine is a client/server application consisting of the Docker daemon, a Docker API that interacts with the daemon, and a command-line interface (CLI) that talks to the daemon.

Docker daemon: The Docker daemon is a service that creates and manages Docker images by using the commands it receives from the client. Essentially, the Docker daemon serves as the control center of a Docker implementation.

Docker client: The Docker client provides the CLI that accesses the Docker API (a REST API) to communicate with the Docker daemon over Unix sockets or a network interface. The client can be connected to a daemon remotely, or a developer can run the daemon and client on the same computer system.
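For instance, the same client can target a local or a remote daemon. A brief sketch (the remote host name is a placeholder, and remote access over SSH assumes Docker 18.09 or later):

    docker version                                 # reports both client and daemon (server) versions
    DOCKER_HOST=ssh://user@remote-host docker ps   # send commands to a remote daemon over SSH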

Docker objects: Docker objects are components of a Docker deployment that help package and distribute applications. They include images, containers, networks, volumes, plug-ins and more.

Docker containers: Docker containers are the live, running instances of Docker images. While Docker images are read-only files, containers are live, ephemeral, executable content. Users can interact with them, and administrators can adjust their settings and conditions by using Docker commands.
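For example, a few common commands for interacting with a running container (the container name is an example):

    docker exec -it web sh   # open an interactive shell inside the running container
    docker logs web          # view the container's output
    docker stats web         # stream live resource usage for the container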

Docker images: Docker images contain executable application source code and all the tools, libraries and dependencies the application code needs to run as a container. When a developer runs the Docker image, it becomes one instance (or multiple instances) of the container.

Building Docker images from scratch is possible, but most developers pull them down from common repositories. Developers can create multiple Docker images from a single base image, and those images will share their stack's commonalities.

Docker images are made up of layers, and each layer corresponds to a version of the image. Whenever a developer makes changes to an image, a new top layer is created, and this top layer replaces the previous top layer as the current version of the image. Previous layers are saved for rollbacks or to be reused in other projects.

Each time a container is created from a Docker image, yet another new layer called the container layer is created. Changes made to the container, like adding or deleting files, are saved to the container layer, and these changes only exist while the container is running.

This iterative image-creation process increases overall efficiency since multiple live container instances can run from a single base image. When they do so, they use a common stack.
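This layering is easy to observe from the CLI. As a sketch (image and container names are examples):

    docker image history nginx:alpine   # lists the image's layers, one row per build step

    # Two containers from one image share its read-only layers; each gets
    # its own writable container layer.
    docker run -d --name web1 nginx:alpine
    docker run -d --name web2 nginx:alpine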

Docker build: Docker build is the command that assembles a Docker image from the instructions in a Dockerfile.

Dockerfile: Every Docker container starts with a simple text file containing instructions for how to build the Docker container image. The Dockerfile automates the process of creating Docker images: it's essentially a list of CLI instructions that Docker Engine runs to assemble the image. The set of Dockerfile instructions is standardized, so Docker operations work the same regardless of an application's contents, infrastructure or environment.
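A minimal, illustrative Dockerfile for a hypothetical Node.js service might look like the following; each instruction produces one image layer, and the base image, file names and app name are assumptions for the example:

    # Base image layer.
    FROM node:20-alpine
    WORKDIR /app
    # Copy dependency manifests first so this layer caches well.
    COPY package*.json ./
    # Install production dependencies.
    RUN npm ci --omit=dev
    # Copy the application source.
    COPY . .
    EXPOSE 3000
    # Default command when a container starts.
    CMD ["node", "server.js"]

Running docker build -t my-app:1.0 . in the same directory then assembles and tags the image.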

Docker documentation: Docker documentation, or Docker docs, refers to the official Docker library of resources, manuals and guides for building containerized applications.

Docker Hub: Docker Hub6 is the public repository of Docker images, calling itself the world's largest library and community for container images7. It holds over 100,000 container images sourced from commercial software vendors, open source projects and individual developers. Docker Hub includes images produced by Docker, Inc., certified images belonging to the Docker Trusted Registry and thousands of other images.

All Docker Hub users can share their images at will. They can also download predefined base images from Docker Hub as a starting point for any containerization project.

Other image repositories exist, including GitHub8. GitHub is a repository hosting service well known for application development tools and as a platform that fosters collaboration and communication. Users of Docker Hub can create a repository (repo) that can hold many images. The repository can be public or private and linked to GitHub or Bitbucket accounts.
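Sharing an image through Docker Hub follows a tag-then-push pattern; the user and repository names below are hypothetical:

    docker tag my-app:1.0 myuser/my-app:1.0   # alias the local image into a Hub repository
    docker push myuser/my-app:1.0             # upload it to Docker Hub (after docker login)
    docker pull myuser/my-app:1.0             # download it on another machine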

Docker Desktop: Docker Desktop is an application for Mac or Windows that includes Docker Engine, the Docker CLI client, Docker Compose, Kubernetes and other tools. It also provides access to Docker Hub.

Docker registry: A Docker registry is a scalable, open source storage and distribution system for Docker images. It enables developers to track image versions in repositories, using tags for identification.

Docker plug-ins: Developers use plug-ins to make Docker Engine even more functional. Plug-ins supporting authorization, volumes and networking are included in the Docker Engine plug-in system; third-party plug-ins can be loaded as well.
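As a sketch, installing and listing plug-ins from the CLI (vieux/sshfs is a commonly cited example of a third-party volume plug-in):

    docker plugin install vieux/sshfs   # install a third-party volume plug-in
    docker plugin ls                    # list installed plug-ins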

Docker extensions: Docker extensions enable developers to use third-party tools within Docker Desktop to extend its functions. Extensions for developer tools include Kubernetes app development, security, observability and more.

Docker Compose: Developers can use Docker Compose to manage multicontainer applications, where all containers run on the same Docker host. Docker Compose reads a YAML (.yml) file that specifies which services make up the application, and it can deploy and run their containers with a single command. Because YAML syntax is language-agnostic, the same Compose file works whether the services themselves are written in Java, Python, Ruby or another language.

Developers can also use Docker Compose to define persistent volumes for storage, specify base nodes and document and configure service dependencies.
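A sketch of a Compose file for a hypothetical two-service application follows (Docker Compose v2 syntax; all service, image and volume names are examples):

    # docker-compose.yml
    services:
      web:
        build: .              # build the web service from a local Dockerfile
        ports:
          - "8080:3000"
        depends_on:
          - db
      db:
        image: postgres:16
        volumes:
          - db-data:/var/lib/postgresql/data   # named volume for persistent storage
    volumes:
      db-data:

A single docker compose up -d then builds, creates and starts both containers.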

Kubernetes and container orchestration

When running just a few containers, managing an application within Docker Engine, the industry's de facto runtime, is pretty simple. However, for deployments comprising thousands of containers and hundreds of services, it's nearly impossible to monitor and manage container lifecycles without a container orchestration tool.

While Docker includes its own orchestration tool (called Docker Swarm), Kubernetes is the industry standard. Other popular container orchestration platforms include Apache Mesos and Nomad.

Kubernetes is an open source container orchestration platform descended from Borg, a project developed for internal use at Google. Introduced to the public in 2014, Kubernetes schedules and automates tasks integral to managing container-based architectures, including container deployment, updates, service discovery, storage provisioning, load balancing, health monitoring and more. In 2015, Google donated Kubernetes to the Cloud Native Computing Foundation (CNCF)9, the open source, vendor-neutral hub of cloud-native computing.
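As a brief, hypothetical illustration of what orchestration adds, the Kubernetes manifest below requests three replicas of a containerized service; Kubernetes then schedules, monitors and replaces those containers as needed (all names and the image are placeholders):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                      # desired number of container instances
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: myuser/my-app:1.0   # image pulled from a registry such as Docker Hub
            ports:
            - containerPort: 3000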

In addition, the open source ecosystem of tools for Kubernetes, including Istio, Knative and Tekton, enables organizations to deploy a high-productivity platform as a service (PaaS) for containerized applications. This ecosystem also provides a faster on-ramp to serverless computing.

Docker and container industry standards

Founded in 2015 after Docker donated its container image specification and the runtime code runc, the Open Container Initiative (OCI)10 is a Linux Foundation project committed to creating open industry standards around the container image format and runtime. The OCI consists of leading companies, including Docker, IBM and Red Hat®. It supports innovation while helping organizations avoid vendor lock-in.

While Docker is the most well-known and widely used container technology, the broader ecosystem has standardized on containerd, and alternatives such as LXC, CRI-O and Podman are also in use. In 2017, Docker donated the containerd project to the CNCF. Containerd is an industry-standard container runtime that uses runc and is the core container runtime of Docker Engine.

Docker security

The complexity surrounding containerized workloads requires implementing and maintaining security controls that safeguard containers and their underlying infrastructure. Docker container security practices are designed to protect containerized applications from risks like security breaches, malware and bad actors.

Since Docker containers are isolated from each other and the host system, they have an inherent level of security by design. However, this isolation is not absolute. Docker security revolves around a holistic zero trust framework that encompasses the runtime, build and orchestration of containers.

The need for Docker and other container-related security has increased the popularity of DevSecOps, a security approach that automates the integration of security practices at every phase of the software development lifecycle, spanning initial design through integration, testing, delivery and deployment. In addition, Docker security best practices include third-party container security tools and solutions, including scanning and monitoring, that can detect security issues before they impact production.
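Several of these practices map to simple runtime options. A minimal hardening sketch, illustrative rather than a complete security posture:

    # Drop all Linux capabilities, run as an unprivileged user, and mount
    # the container's root filesystem read-only.
    docker run --rm --read-only --cap-drop ALL --user 1000:1000 alpine id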

