0. Why write an analysis of the Docker source code
Everyone understands the concept of Docker, but only knowing how to use it is not worth much. Container technology is all the hype right now, yet what you find everywhere is how to use Docker, with very little written about its internals. The most complete treatment is the book "Docker Source Code Analysis" (《Docker源码分析》), but it is dated, and with Docker's rapid iteration many parts of it no longer match the current code.
I happen to be preparing for the autumn recruiting season, so while I still have the motivation (and pressure) of job hunting, I am going through the source code once. My write-up may be neither clear nor thorough, but at least I will have done right by myself — if I must die, I'll die on my feet.
Initialization of the docker daemon
Docker first initializes the daemon; the entry point is main() in cmd/dockerd/docker.go. It mainly sets up the standard output/error streams, builds a new command via newDaemonCommand(), and executes it. At the very beginning of the function there is a check on reexec.Init(): it returns true when the process was re-executed under a name registered with reexec.Register(), in which case the registered initializer has already run and main() simply returns. Judging from a related Q&A, this initialization only matters for the daemon process — and since dockerd is now its own binary, there is no longer any need to run docker -d.
func main() {
    if reexec.Init() {
        return
    }

    // Set terminal emulation based on platform as required.
    _, stdout, stderr := term.StdStreams()

    // @jhowardmsft - maybe there is a historic reason why on non-Windows, stderr is used
    // here. However, on Windows it makes no sense and there is no need.
    if runtime.GOOS == "windows" {
        logrus.SetOutput(stdout)
    } else {
        logrus.SetOutput(stderr)
    }

    cmd := newDaemonCommand()
    cmd.SetOutput(stdout)
    if err := cmd.Execute(); err != nil {
        fmt.Fprintf(stderr, "%s\n", err)
        os.Exit(1)
    }
}
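For context, reexec works by registering named initializers before main() runs. Here is a minimal sketch of the pattern, assuming only the pkg/reexec package itself; the name "my-init" and what the initializer does are made up for illustration:

package main

import (
    "fmt"
    "os"

    "github.com/docker/docker/pkg/reexec"
)

func init() {
    // Register an initializer under a hypothetical name. When the binary is
    // re-executed as "my-init", this function runs instead of main().
    reexec.Register("my-init", func() {
        fmt.Println("running re-exec initializer in pid", os.Getpid())
        os.Exit(0)
    })
}

func main() {
    // Returns true when this process was started under a registered name,
    // in which case the initializer above has already run and we bail out.
    if reexec.Init() {
        return
    }

    // Re-execute ourselves under the registered name; the child process runs
    // the initializer rather than the normal main() path.
    cmd := reexec.Command("my-init")
    cmd.Stdout = os.Stdout
    cmd.Stderr = os.Stderr
    if err := cmd.Run(); err != nil {
        fmt.Println("re-exec failed:", err)
    }
}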
Looking into newDaemonCommand(), we can see that Docker uses cobra as its command-line framework and registers runDaemon as the RunE handler that actually runs the docker daemon. So where does that definition get executed? After cmd is returned, Execute() is called on it, which in turn runs the series of PreRunE, RunE, PostRunE hooks and so on — and all of these hooks are registered when the cobra command is set up.
func newDaemonCommand() *cobra.Command {
    opts := newDaemonOptions(config.New())

    cmd := &cobra.Command{
        Use:           "dockerd [OPTIONS]",
        Short:         "A self-sufficient runtime for containers.",
        SilenceUsage:  true,
        SilenceErrors: true,
        Args:          cli.NoArgs,
        RunE: func(cmd *cobra.Command, args []string) error {
            opts.flags = cmd.Flags()
            return runDaemon(opts)
        },
        DisableFlagsInUseLine: true,
        Version:               fmt.Sprintf("%s, build %s", dockerversion.Version, dockerversion.GitCommit),
    }
    ...
    return cmd
}
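To make the hook order concrete, here is a minimal, self-contained cobra sketch (the command name and messages are made up); Execute() runs PersistentPreRunE, then PreRunE, then RunE, then PostRunE:

package main

import (
    "fmt"
    "os"

    "github.com/spf13/cobra"
)

func main() {
    cmd := &cobra.Command{
        Use: "demo",
        PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("1. PersistentPreRunE")
            return nil
        },
        PreRunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("2. PreRunE")
            return nil
        },
        RunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("3. RunE (this is where runDaemon would be called)")
            return nil
        },
        PostRunE: func(cmd *cobra.Command, args []string) error {
            fmt.Println("4. PostRunE")
            return nil
        },
    }
    if err := cmd.Execute(); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}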
runDaemon itself is very simple: on Unix it just creates a daemonCli and starts it (Windows is a bit more involved and is not covered here for now). NewDaemonCli() returns a DaemonCli struct holding the configuration, the flags, the API server, the daemon itself, and the authorization-related pieces. The real work of bringing up a daemon is concentrated in start(), which initializes the daemon's components — the API server, the routers, the registry service, the pluginStore, and so on — and creates the daemon via NewDaemon(). At that point the DaemonCli is fully initialized.
func runDaemon(opts *daemonOptions) error {
    daemonCli := NewDaemonCli()
    return daemonCli.start(opts)
}

// DaemonCli represents the daemon CLI.
type DaemonCli struct {
    *config.Config
    configFile *string
    flags      *pflag.FlagSet

    api             *apiserver.Server
    d               *daemon.Daemon
    authzMiddleware *authorization.Middleware // authzMiddleware enables to dynamically reload the authorization plugins
}

func (cli *DaemonCli) start(opts *daemonOptions) (err error) {
    // Only the key calls are shown here.
    opts.SetDefaultOptions(opts.flags)
    loadDaemonCliConfig(opts) // load the configuration
    setDefaultUmask()

    // Create the daemon root before we create ANY other files (PID, or migrate keys)
    daemon.CreateDaemonRoot(cli.Config)
    pidfile.New(cli.Pidfile) // process ID file

    newAPIServerConfig(cli)                        // create the API server
    registry.NewService(cli.Config.ServiceOptions) // create the registry service
    libcontainerd.New                              // create the libcontainerd remote

    // Notify that the API is active, but before daemon is set up.
    preNotifySystem()

    plugin.NewStore() // plugin store

    // The key step: the daemon is actually created from the config, registry,
    // containerd remote and plugin store.
    daemon.NewDaemon(cli.Config, registryService, containerdRemote, pluginStore)

    validateAuthzPlugins() // authorization plugins
    startMetricsServer()
    createAndStartCluster()
    RestartSwarmContainers()
    newRouterOptions(cli.Config, d) // configure the routers

    go cli.api.Wait(serveAPIWait) // the API server starts listening and serving

    // after the daemon is done setting up we can notify systemd api
    notifySystem()
}
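A side note on the last step: the API server runs in a goroutine and the main goroutine blocks on a channel until it exits. A minimal sketch of that pattern (startAPIServer is a stand-in I made up, not the real dockerd code):

package main

import (
    "errors"
    "fmt"
    "time"
)

// startAPIServer stands in for cli.api.Wait: it runs the server in the
// background and delivers its exit error on the channel when it stops.
func startAPIServer(waitChan chan error) {
    go func() {
        // Pretend the server runs for a while and then shuts down.
        time.Sleep(100 * time.Millisecond)
        waitChan <- errors.New("api server shut down")
    }()
}

func main() {
    serveAPIWait := make(chan error)
    startAPIServer(serveAPIWait)

    // ... the rest of the daemon setup happens here ...

    // Block until the API server exits, mirroring the real start(), which
    // ends by receiving from serveAPIWait after notifying systemd.
    errAPI := <-serveAPIWait
    fmt.Println("shutting down:", errAPI)
}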
NewDaemon() deserves extra attention, because it is the real creator of the daemon. The function is very long, but it is heavily commented, so the overall flow is fairly easy to follow:
Set the MTU
Check the root key limit (related to how many containers can be started)
Verify the daemon configuration
Check the network environment (whether the Linux bridge is disabled)
Check that the platform is supported as a daemon (darwin, linux, windows, and so on)
Check that the current system meets the requirements
setupRemappedRoot (user-namespace isolation: users inside a container are mapped to ordinary users on the host)
Set up the path-related environment variables (temp directories)
Set up failure handling (shut the daemon down cleanly if initialization fails)
Some low-level setup for dumping goroutine stacks
Set up the seccomp profile (a security mechanism)
Set up AppArmor (another security mechanism — Docker simply piles the available security mechanisms on top of one another)
Initialize the directories and stores for containers and images, by default under /var/lib/docker/containers
Set up the graphdriver (see the documentation)
Set up the registry service, the layer stores, the image store and the volume service — i.e. images and volumes
Configure logging
Call NewClient() to create a libcontainerd instance that the daemon uses to communicate with containerd
NewClient() is defined in the libcontainerd package. From the code you can see that communication with containerd happens over gRPC, and as soon as the client is created, it starts processing the event stream concurrently in a goroutine.
func NewDaemon(config *config.Config, registryService registry.Service, containerdRemote libcontainerd.Remote, pluginStore *plugin.Store) (daemon *Daemon, err error) {
    setDefaultMtu(config)

    // Ensure that we have a correct root key limit for launching containers.
    if err := ModifyRootKeyLimit(); err != nil {
        logrus.Warnf("unable to modify root key limit, number of containers could be limited by this quota: %v", err)
    }

    // Ensure we have compatible and valid configuration options
    if err := verifyDaemonSettings(config); err != nil {
        return nil, err
    }

    // Do we have a disabled network?
    config.DisableBridge = isBridgeNetworkDisabled(config)

    // Verify the platform is supported as a daemon
    if !platformSupported {
        return nil, errSystemNotSupported
    }

    // Validate platform-specific requirements
    if err := checkSystem(); err != nil {
        return nil, err
    }

    idMappings, err := setupRemappedRoot(config)
    if err != nil {
        return nil, err
    }
    rootIDs := idMappings.RootPair()
    if err := setupDaemonProcess(config); err != nil {
        return nil, err
    }

    // set up the tmpDir to use a canonical path
    tmp, err := prepareTempDir(config.Root, rootIDs)
    if err != nil {
        return nil, fmt.Errorf("Unable to get the TempDir under %s: %s", config.Root, err)
    }
    realTmp, err := getRealPath(tmp)
    if err != nil {
        return nil, fmt.Errorf("Unable to get the full path to the TempDir (%s): %s", tmp, err)
    }
    if runtime.GOOS == "windows" {
        if _, err := os.Stat(realTmp); err != nil && os.IsNotExist(err) {
            if err := system.MkdirAll(realTmp, 0700, ""); err != nil {
                return nil, fmt.Errorf("Unable to create the TempDir (%s): %s", realTmp, err)
            }
        }
        os.Setenv("TEMP", realTmp)
        os.Setenv("TMP", realTmp)
    } else {
        os.Setenv("TMPDIR", realTmp)
    }

    d := &Daemon{
        configStore: config,
        PluginStore: pluginStore,
        startupDone: make(chan struct{}),
    }
    // Ensure the daemon is properly shutdown if there is a failure during
    // initialization
    defer func() {
        if err != nil {
            if err := d.Shutdown(); err != nil {
                logrus.Error(err)
            }
        }
    }()

    if err := d.setGenericResources(config); err != nil {
        return nil, err
    }
    // set up SIGUSR1 handler on Unix-like systems, or a Win32 global event
    // on Windows to dump Go routine stacks
    stackDumpDir := config.Root
    if execRoot := config.GetExecRoot(); execRoot != "" {
        stackDumpDir = execRoot
    }
    d.setupDumpStackTrap(stackDumpDir)

    if err := d.setupSeccompProfile(); err != nil {
        return nil, err
    }

    // Set the default isolation mode (only applicable on Windows)
    if err := d.setDefaultIsolation(); err != nil {
        return nil, fmt.Errorf("error setting default isolation mode: %v", err)
    }

    if err := configureMaxThreads(config); err != nil {
        logrus.Warnf("Failed to configure golang's threads limit: %v", err)
    }

    if err := ensureDefaultAppArmorProfile(); err != nil {
        logrus.Errorf(err.Error())
    }

    daemonRepo := filepath.Join(config.Root, "containers")
    if err := idtools.MkdirAllAndChown(daemonRepo, 0700, rootIDs); err != nil {
        return nil, err
    }

    // Create the directory where we'll store the runtime scripts (i.e. in
    // order to support runtimeArgs)
    daemonRuntimes := filepath.Join(config.Root, "runtimes")
    if err := system.MkdirAll(daemonRuntimes, 0700, ""); err != nil {
        return nil, err
    }
    if err := d.loadRuntimes(); err != nil {
        return nil, err
    }

    if runtime.GOOS == "windows" {
        if err := system.MkdirAll(filepath.Join(config.Root, "credentialspecs"), 0, ""); err != nil {
            return nil, err
        }
    }

    // On Windows we don't support the environment variable, or a user supplied graphdriver
    // as Windows has no choice in terms of which graphdrivers to use. It's a case of
    // running Windows containers on Windows - windowsfilter, running Linux containers on Windows,
    // lcow. Unix platforms however run a single graphdriver for all containers, and it can
    // be set through an environment variable, a daemon start parameter, or chosen through
    // initialization of the layerstore through driver priority order for example.
    d.graphDrivers = make(map[string]string)
    layerStores := make(map[string]layer.Store)
    if runtime.GOOS == "windows" {
        d.graphDrivers[runtime.GOOS] = "windowsfilter"
        if system.LCOWSupported() {
            d.graphDrivers["linux"] = "lcow"
        }
    } else {
        driverName := os.Getenv("DOCKER_DRIVER")
        if driverName == "" {
            driverName = config.GraphDriver
        } else {
            logrus.Infof("Setting the storage driver from the $DOCKER_DRIVER environment variable (%s)", driverName)
        }
        d.graphDrivers[runtime.GOOS] = driverName // May still be empty. Layerstore init determines instead.
    }

    d.RegistryService = registryService
    logger.RegisterPluginGetter(d.PluginStore)

    metricsSockPath, err := d.listenMetricsSock()
    if err != nil {
        return nil, err
    }
    registerMetricsPluginCallback(d.PluginStore, metricsSockPath)

    createPluginExec := func(m *plugin.Manager) (plugin.Executor, error) {
        return pluginexec.New(getPluginExecRoot(config.Root), containerdRemote, m)
    }

    // Plugin system initialization should happen before restore. Do not change order.
    d.pluginManager, err = plugin.NewManager(plugin.ManagerConfig{
        Root:               filepath.Join(config.Root, "plugins"),
        ExecRoot:           getPluginExecRoot(config.Root),
        Store:              d.PluginStore,
        CreateExecutor:     createPluginExec,
        RegistryService:    registryService,
        LiveRestoreEnabled: config.LiveRestoreEnabled,
        LogPluginEvent:     d.LogPluginEvent, // todo: make private
        AuthzMiddleware:    config.AuthzMiddleware,
    })
    if err != nil {
        return nil, errors.Wrap(err, "couldn't create plugin manager")
    }

    if err := d.setupDefaultLogConfig(); err != nil {
        return nil, err
    }

    for operatingSystem, gd := range d.graphDrivers {
        layerStores[operatingSystem], err = layer.NewStoreFromOptions(layer.StoreOptions{
            Root:                      config.Root,
            MetadataStorePathTemplate: filepath.Join(config.Root, "image", "%s", "layerdb"),
            GraphDriver:               gd,
            GraphDriverOptions:        config.GraphOptions,
            IDMappings:                idMappings,
            PluginGetter:              d.PluginStore,
            ExperimentalEnabled:       config.Experimental,
            OS:                        operatingSystem,
        })
        if err != nil {
            return nil, err
        }
    }

    // As layerstore initialization may set the driver
    for os := range d.graphDrivers {
        d.graphDrivers[os] = layerStores[os].DriverName()
    }

    // Configure and validate the kernels security support. Note this is a Linux/FreeBSD
    // operation only, so it is safe to pass *just* the runtime OS graphdriver.
    if err := configureKernelSecuritySupport(config, d.graphDrivers[runtime.GOOS]); err != nil {
        return nil, err
    }

    imageRoot := filepath.Join(config.Root, "image", d.graphDrivers[runtime.GOOS])
    ifs, err := image.NewFSStoreBackend(filepath.Join(imageRoot, "imagedb"))
    if err != nil {
        return nil, err
    }

    lgrMap := make(map[string]image.LayerGetReleaser)
    for os, ls := range layerStores {
        lgrMap[os] = ls
    }
    imageStore, err := image.NewImageStore(ifs, lgrMap)
    if err != nil {
        return nil, err
    }

    d.volumes, err = volumesservice.NewVolumeService(config.Root, d.PluginStore, rootIDs, d)
    if err != nil {
        return nil, err
    }

    trustKey, err := loadOrCreateTrustKey(config.TrustKeyPath)
    if err != nil {
        return nil, err
    }

    trustDir := filepath.Join(config.Root, "trust")

    if err := system.MkdirAll(trustDir, 0700, ""); err != nil {
        return nil, err
    }

    // We have a single tag/reference store for the daemon globally. However, it's
    // stored under the graphdriver. On host platforms which only support a single
    // container OS, but multiple selectable graphdrivers, this means depending on which
    // graphdriver is chosen, the global reference store is under there. For
    // platforms which support multiple container operating systems, this is slightly
    // more problematic as where does the global ref store get located? Fortunately,
    // for Windows, which is currently the only daemon supporting multiple container
    // operating systems, the list of graphdrivers available isn't user configurable.
    // For backwards compatibility, we just put it under the windowsfilter
    // directory regardless.
    refStoreLocation := filepath.Join(imageRoot, `repositories.json`)
    rs, err := refstore.NewReferenceStore(refStoreLocation)
    if err != nil {
        return nil, fmt.Errorf("Couldn't create reference store repository: %s", err)
    }

    distributionMetadataStore, err := dmetadata.NewFSMetadataStore(filepath.Join(imageRoot, "distribution"))
    if err != nil {
        return nil, err
    }

    // No content-addressability migration on Windows as it never supported pre-CA
    if runtime.GOOS != "windows" {
        migrationStart := time.Now()
        if err := v1.Migrate(config.Root, d.graphDrivers[runtime.GOOS], layerStores[runtime.GOOS], imageStore, rs, distributionMetadataStore); err != nil {
            logrus.Errorf("Graph migration failed: %q. Your old graph data was found to be too inconsistent for upgrading to content-addressable storage. Some of the old data was probably not upgraded. We recommend starting over with a clean storage directory if possible.", err)
        }
        logrus.Infof("Graph migration to content-addressability took %.2f seconds", time.Since(migrationStart).Seconds())
    }

    // Discovery is only enabled when the daemon is launched with an address to advertise. When
    // initialized, the daemon is registered and we can store the discovery backend as it's read-only
    if err := d.initDiscovery(config); err != nil {
        return nil, err
    }

    sysInfo := sysinfo.New(false)
    // Check if Devices cgroup is mounted, it is hard requirement for container security,
    // on Linux.
    if runtime.GOOS == "linux" && !sysInfo.CgroupDevicesEnabled {
        return nil, errors.New("Devices cgroup isn't mounted")
    }

    d.ID = trustKey.PublicKey().KeyID()
    d.repository = daemonRepo
    d.containers = container.NewMemoryStore()
    if d.containersReplica, err = container.NewViewDB(); err != nil {
        return nil, err
    }
    d.execCommands = exec.NewStore()
    d.idIndex = truncindex.NewTruncIndex([]string{})
    d.statsCollector = d.newStatsCollector(1 * time.Second)

    d.EventsService = events.New()
    d.root = config.Root
    d.idMappings = idMappings
    d.seccompEnabled = sysInfo.Seccomp
    d.apparmorEnabled = sysInfo.AppArmor

    d.linkIndex = newLinkIndex()

    // TODO: imageStore, distributionMetadataStore, and ReferenceStore are only
    // used above to run migration. They could be initialized in ImageService
    // if migration is called from daemon/images. layerStore might move as well.
    d.imageService = images.NewImageService(images.ImageServiceConfig{
        ContainerStore:            d.containers,
        DistributionMetadataStore: distributionMetadataStore,
        EventsService:             d.EventsService,
        ImageStore:                imageStore,
        LayerStores:               layerStores,
        MaxConcurrentDownloads:    *config.MaxConcurrentDownloads,
        MaxConcurrentUploads:      *config.MaxConcurrentUploads,
        ReferenceStore:            rs,
        RegistryService:           registryService,
        TrustKey:                  trustKey,
    })

    go d.execCommandGC()

    d.containerd, err = containerdRemote.NewClient(ContainersNamespace, d)
    if err != nil {
        return nil, err
    }

    if err := d.restore(); err != nil {
        return nil, err
    }
    close(d.startupDone)

    // FIXME: this method never returns an error
    info, _ := d.SystemInfo()

    engineInfo.WithValues(
        dockerversion.Version,
        dockerversion.GitCommit,
        info.Architecture,
        info.Driver,
        info.KernelVersion,
        info.OperatingSystem,
        info.OSType,
        info.ID,
    ).Set(1)
    engineCpus.Set(float64(info.NCPU))
    engineMemory.Set(float64(info.MemTotal))

    gd := ""
    for os, driver := range d.graphDrivers {
        if len(gd) > 0 {
            gd += ", "
        }
        gd += driver
        if len(d.graphDrivers) > 1 {
            gd = fmt.Sprintf("%s (%s)", gd, os)
        }
    }
    logrus.WithFields(logrus.Fields{
        "version":        dockerversion.Version,
        "commit":         dockerversion.GitCommit,
        "graphdriver(s)": gd,
    }).Info("Docker daemon")

    return d, nil
}

func (r *remote) NewClient(ns string, b Backend) (Client, error) {
    c := &client{
        stateDir:   r.stateDir,
        logger:     r.logger.WithField("namespace", ns),
        namespace:  ns,
        backend:    b,
        containers: make(map[string]*container),
    }

    rclient, err := containerd.New(r.GRPC.Address, containerd.WithDefaultNamespace(ns))
    if err != nil {
        return nil, err
    }
    c.remote = rclient

    go c.processEventStream(r.shutdownContext)

    r.Lock()
    r.clients = append(r.clients, c)
    r.Unlock()
    return c, nil
}
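The containerd.New call above is the standard containerd Go client. A minimal standalone sketch of connecting to containerd over gRPC under a dedicated namespace — the socket path and the "moby" namespace here are just common defaults, adjust them for your setup:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/containerd/containerd"
    "github.com/containerd/containerd/namespaces"
)

func main() {
    // Connect to containerd's gRPC socket; dockerd does the same thing with
    // its own socket address and namespace.
    client, err := containerd.New("/run/containerd/containerd.sock",
        containerd.WithDefaultNamespace("moby"))
    if err != nil {
        log.Fatal(err)
    }
    defer client.Close()

    ctx := namespaces.WithNamespace(context.Background(), "moby")

    // List the containers containerd currently knows about in this namespace.
    containers, err := client.Containers(ctx)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("containerd manages %d container(s) in this namespace\n", len(containers))
}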
With that, the initialization of the docker daemon is complete.
Initialization of the docker client
One pitfall I ran into: the code in moby is not complete — under cmd/ there is only a dockerd directory and no docker directory. Comparing with docker/docker-ce, the latter's components directory contains two parts, cli and engine: engine corresponds to the moby project, while cli is the client code used by docker-ce and docker-ee. Docker split moby out of the old docker monolith, with Docker living on as an independent product, and moby itself no longer even contains the docker client code.
The client's initialization is comparatively simple and lives in cli/cmd/docker/docker.go. Many versions ago the daemon and the client shared a single executable; they are separate now, but the basic flow is still similar. As mentioned earlier, reexec only runs during the daemon's initialization. The client first configures the standard input/output streams, then creates a client through NewDockerCli, parses the command, and calls Execute(), completing the life cycle of a request.
func main() {
    // Set terminal emulation based on platform as required.
    stdin, stdout, stderr := term.StdStreams()
    logrus.SetOutput(stderr)

    dockerCli := command.NewDockerCli(stdin, stdout, stderr, contentTrustEnabled())
    cmd := newDockerCommand(dockerCli)

    if err := cmd.Execute(); err != nil {
        if sterr, ok := err.(cli.StatusError); ok {
            if sterr.Status != "" {
                fmt.Fprintln(stderr, sterr.Status)
            }
            // StatusError should only be used for errors, and all errors should
            // have a non-zero exit status, so never exit with 0
            if sterr.StatusCode == 0 {
                os.Exit(1)
            }
            os.Exit(sterr.StatusCode)
        }
        fmt.Fprintln(stderr, err)
        os.Exit(1)
    }
}
NewDockerCli returns a DockerCli instance with its I/O wired up; its doc comment reads: "NewDockerCli returns a DockerCli instance with IO output and error streams set by in, out and err."
A DockerCli instance does not contain much: besides the config file, it holds the input/output streams and the standard error writer, some server and client information, and an API client instance. The config file lives at ~/.docker/config.json, and the APIClient interface is split into a common part and an experimental part, which matches how Docker's features are organized.
type APIClient interface {
    CommonAPIClient
    apiClientExperimental
}

type DockerCli struct {
    configFile   *configfile.ConfigFile
    in           *InStream
    out          *OutStream
    err          io.Writer
    client       client.APIClient
    serverInfo   ServerInfo
    clientInfo   ClientInfo
    contentTrust bool
}
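To get a feel for what the configFile field holds, here is a small standalone sketch that reads ~/.docker/config.json directly — only a couple of common keys are shown; the real configfile.ConfigFile has many more:

package main

import (
    "encoding/json"
    "fmt"
    "os"
    "path/filepath"
)

// A tiny subset of what ~/.docker/config.json can contain.
type dockerConfig struct {
    Auths       map[string]json.RawMessage `json:"auths"`
    HTTPHeaders map[string]string          `json:"HttpHeaders"`
}

func main() {
    home, err := os.UserHomeDir()
    if err != nil {
        panic(err)
    }
    data, err := os.ReadFile(filepath.Join(home, ".docker", "config.json"))
    if err != nil {
        fmt.Println("no client config found:", err)
        return
    }
    var cfg dockerConfig
    if err := json.Unmarshal(data, &cfg); err != nil {
        panic(err)
    }
    fmt.Printf("configured registries: %d, extra HTTP headers: %v\n", len(cfg.Auths), cfg.HTTPHeaders)
}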
newDockerCommand builds the docker command that will eventually be executed against the server. As in the daemon analysis, the PersistentPreRunE defined here is the handler invoked at execution time: it applies the flag values to the options, and dockerPreRun that follows is also just part of that option setup. The client is really initialized in Initialize, which creates the actual client through NewClientWithOpts inside NewAPIClientFromFlags and decides whether experimental features should be enabled; finally, initializeFromClient() pings the daemon to check whether the connection to the backend is established. After that, isSupported checks whether the given command is supported by the client. In short, this function does the preparation and validation work before a docker command is sent off to the server for execution.

The cmd itself is built afterwards: it is configured through the flags, its output is bound to dockerCli's output, and so on, and finally the cmd itself is returned.
func newDockerCommand(dockerCli *command.DockerCli) *cobra.Command {
    opts := cliflags.NewClientOptions()
    var flags *pflag.FlagSet

    cmd := &cobra.Command{
        Use:              "docker [OPTIONS] COMMAND [ARG...]",
        Short:            "A self-sufficient runtime for containers",
        SilenceUsage:     true,
        SilenceErrors:    true,
        TraverseChildren: true,
        Args:             noArgs,
        PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
            // flags must be the top-level command flags, not cmd.Flags()
            opts.Common.SetDefaultOptions(flags)
            dockerPreRun(opts)
            if err := dockerCli.Initialize(opts); err != nil {
                return err
            }
            return isSupported(cmd, dockerCli)
        },
        Version:               fmt.Sprintf("%s, build %s", cli.Version, cli.GitCommit),
        DisableFlagsInUseLine: true,
    }
    cli.SetupRootCommand(cmd)

    flags = cmd.Flags()
    flags.BoolP("version", "v", false, "Print version information and quit")
    flags.StringVar(&opts.ConfigDir, "config", cliconfig.Dir(), "Location of client config files")
    opts.Common.InstallFlags(flags)

    setFlagErrorFunc(dockerCli, cmd, flags, opts)

    setHelpFunc(dockerCli, cmd, flags, opts)

    cmd.SetOutput(dockerCli.Out())
    commands.AddCommands(cmd, dockerCli)

    disableFlagsInUseLine(cmd)
    setValidateArgs(dockerCli, cmd, flags, opts)

    return cmd
}
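The "ping" mentioned above is an ordinary API call. A minimal sketch of checking connectivity to the daemon with the Go client — it assumes a local daemon reachable via the default socket or DOCKER_HOST:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/docker/docker/client"
)

func main() {
    // Build a client from the environment (DOCKER_HOST, DOCKER_API_VERSION, ...).
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    // Ping hits the daemon's /_ping endpoint, which is roughly what the
    // client relies on to confirm the connection is up.
    ping, err := cli.Ping(context.Background())
    if err != nil {
        log.Fatal("cannot reach the docker daemon: ", err)
    }
    fmt.Println("daemon API version:", ping.APIVersion)
}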
One more thing worth noting: how does docker know which operation a user's command corresponds to? The root command is configured in SetupRootCommand() (templates, help, global flags), and the individual subcommands are registered through commands.AddCommands(), so the client knows which handler belongs to which cmd. The logic that finally sends a command to the server is wrapped inside the API client's functions (declared in interface.go); for example, ContainerCreate contains the line serverResp, err := cli.post(ctx, "/containers/create", query, body, nil). In short, during initialization the docker client registers a handler for each cmd, builds the correct cmd from the flags the user typed, finds the handler that should run, and that handler sends the request to the server, where the daemon processes it (the concrete steps will be analyzed later).
func SetupRootCommand(rootCmd *cobra.Command) {
    cobra.AddTemplateFunc("hasSubCommands", hasSubCommands)
    cobra.AddTemplateFunc("hasManagementSubCommands", hasManagementSubCommands)
    cobra.AddTemplateFunc("operationSubCommands", operationSubCommands)
    cobra.AddTemplateFunc("managementSubCommands", managementSubCommands)
    cobra.AddTemplateFunc("wrappedFlagUsages", wrappedFlagUsages)

    rootCmd.SetUsageTemplate(usageTemplate)
    rootCmd.SetHelpTemplate(helpTemplate)
    rootCmd.SetFlagErrorFunc(FlagErrorFunc)
    rootCmd.SetHelpCommand(helpCommand)
    rootCmd.SetVersionTemplate("Docker version {{.Version}}\n")

    rootCmd.PersistentFlags().BoolP("help", "h", false, "Print usage")
    rootCmd.PersistentFlags().MarkShorthandDeprecated("help", "please use --help")
    rootCmd.PersistentFlags().Lookup("help").Hidden = true
}
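As an illustration of how a CLI command ends up as an HTTP request, here is a hedged sketch that calls ContainerCreate directly through the Go client. The exact signature varies between client versions (this form matches the era discussed here), and the image and command are placeholders:

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/client"
)

func main() {
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    ctx := context.Background()

    // Under the hood this issues POST /containers/create against the daemon,
    // exactly the cli.post(...) call quoted above.
    resp, err := cli.ContainerCreate(ctx,
        &container.Config{
            Image: "busybox", // placeholder image; it must already be pulled
            Cmd:   []string{"echo", "hello"},
        },
        nil, // host config
        nil, // networking config
        "",  // let the daemon pick a name
    )
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("created container", resp.ID)
}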
The client folder in moby defines all of the docker client's API operations, and the concrete implementation of every commonly used docker command can be found there. As the package documentation puts it, you use the library by creating a client object and calling methods on it; the client can be created either from environment variables with NewEnvClient, or configured manually with NewClient.

So Docker gives us two ways to initialize a client: from environment variables, or by manual configuration. Let's first look at the Client struct. It contains the following fields (a code sketch follows the list):
scheme: HTTP or HTTPS
host: the address of the server
proto: the protocol between client and server, e.g. a unix socket
addr: the address to connect to, parsed from host
basePath: the base API path
client: the underlying HTTP client that actually sends and receives requests
version: the API version to use (for compatibility with the server)
customHTTPHeaders: custom HTTP headers configured by the user
manualOverride: set to true when the user pins the version manually
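Put together, the struct looks roughly like this — a paraphrase of the fields listed above, not an exact copy of the source:

package client

import "net/http"

// Client is roughly what moby's client.Client looks like in this era
// (field set paraphrased from the list above).
type Client struct {
    scheme            string            // "http" or "https"
    host              string            // full host string, e.g. "unix:///var/run/docker.sock"
    proto             string            // transport protocol parsed from host, e.g. "unix" or "tcp"
    addr              string            // address to dial, parsed from host
    basePath          string            // base API path
    client            *http.Client      // the HTTP client that actually talks to the daemon
    version           string            // API version to use
    customHTTPHeaders map[string]string // extra headers set by the user
    manualOverride    bool              // true when the user pinned the version explicitly
}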
There are two public APIs for creating a client, NewEnvClient() and NewClient(), but the first is already deprecated. Both of these higher-level APIs call NewClientWithOpts(), which unifies client creation. Concretely, the FromEnv option configures the client from environment variables. Four environment variables are supported (a usage sketch follows the list):
DOCKER_HOST: the URL of the docker daemon.
DOCKER_API_VERSION: the API version to use; leave it empty for the latest.
DOCKER_CERT_PATH: the directory to load TLS certificates from.
DOCKER_TLS_VERIFY: enable or disable TLS verification; off by default.
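A minimal sketch of driving FromEnv with these variables, assuming a daemon listening on the default local socket; the values shown are placeholders:

package main

import (
    "fmt"
    "log"
    "os"

    "github.com/docker/docker/client"
)

func main() {
    // These would normally be set in the shell; they are set here only to
    // make the example self-contained.
    os.Setenv("DOCKER_HOST", "unix:///var/run/docker.sock")
    os.Setenv("DOCKER_API_VERSION", "1.38") // pin a version; leave unset for the default

    // FromEnv reads DOCKER_HOST, DOCKER_API_VERSION, DOCKER_CERT_PATH and
    // DOCKER_TLS_VERIFY and applies them to the client.
    cli, err := client.NewClientWithOpts(client.FromEnv)
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    fmt.Println("client API version:", cli.ClientVersion())
}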
Inside NewClientWithOpts, defaultHTTPClient is called to build a default client, and then the variadic option functions are applied one by one to override that configuration.
// Deprecated: use NewClientWithOpts(FromEnv)
func NewEnvClient() (*Client, error) {
    return NewClientWithOpts(FromEnv)
}

func NewClient(host string, version string, client *http.Client, httpHeaders map[string]string) (*Client, error) {
    return NewClientWithOpts(WithHost(host), WithVersion(version), WithHTTPClient(client), WithHTTPHeaders(httpHeaders))
}

func NewClientWithOpts(ops ...func(*Client) error) (*Client, error) {
    client, err := defaultHTTPClient(DefaultDockerHost)
    if err != nil {
        return nil, err
    }
    c := &Client{
        host:    DefaultDockerHost,
        version: api.DefaultVersion,
        scheme:  "http",
        client:  client,
        proto:   defaultProto,
        addr:    defaultAddr,
    }

    for _, op := range ops {
        if err := op(c); err != nil {
            return nil, err
        }
    }

    if _, ok := c.client.Transport.(http.RoundTripper); !ok {
        return nil, fmt.Errorf("unable to verify TLS configuration, invalid transport %v", c.client.Transport)
    }
    tlsConfig := resolveTLSConfig(c.client.Transport)
    if tlsConfig != nil {
        // TODO(stevvooe): This isn't really the right way to write clients in Go.
        // `NewClient` should probably only take an `*http.Client` and work from there.
        // Unfortunately, the model of having a host-ish/url-thingy as the connection
        // string has us confusing protocol and transport layers. We continue doing
        // this to avoid breaking existing clients but this should be addressed.
        c.scheme = "https"
    }

    return c, nil
}
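And the manual route, using the same functional options that the quoted code applies in its ops loop — the TCP endpoint, version and header values are placeholders:

package main

import (
    "fmt"
    "log"

    "github.com/docker/docker/client"
)

func main() {
    // Configure the client explicitly instead of reading the environment.
    cli, err := client.NewClientWithOpts(
        client.WithHost("tcp://127.0.0.1:2375"), // placeholder daemon endpoint
        client.WithVersion("1.38"),              // pin the API version
        client.WithHTTPHeaders(map[string]string{
            "User-Agent": "source-reading-demo", // extra headers sent with every request
        }),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    fmt.Println("client configured for host:", cli.DaemonHost())
}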
Author: lovenashbest
Original (Chinese): https://www.jianshu.com/p/2519ac06aa1d