TechTrendFeed
Handling Exclusive Configurations and Associated Templates

By Admin, September 1, 2025


Collection Overview

This text is Half 2.3 of a multi-part sequence: “Improvement of system configuration administration.”

The whole sequence:

  1. Introduction
  2. Migration finish evolution
    1. Working with secrets and techniques, IaC, and deserializing knowledge in Go
    2. Constructing the CLI and API
    3. Dealing with unique configurations and related templates
  3. Efficiency consideration
  4. Abstract and reflections

Exclusive Hosts Configuration

Purpose and Requirements

The minimal configuration unit for the new SCM is a host group that performs the same function in the infrastructure. In other words, we can define this as a role for the hosts. Each role has an environment that describes the host's membership in a dedicated set for different purposes. For example, a role called backend can have three environments, prod, stage, and dev, in the project myproj. Consequently, these names define the hostgroup:

  • myproj-dev-backend
  • myproj-stage-backend
  • myproj-prod-backend
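
The naming convention above can be sketched in a few lines of Go (an illustration only; the SCM's actual naming code is not shown in this series):

```go
package main

import "fmt"

// hostgroupName builds the hostgroup identifier from project, environment,
// and role, following the <project>-<environment>-<role> convention.
func hostgroupName(project, env, role string) string {
	return fmt.Sprintf("%s-%s-%s", project, env, role)
}

func main() {
	for _, env := range []string{"dev", "stage", "prod"} {
		fmt.Println(hostgroupName("myproj", env, "backend"))
	}
	// myproj-dev-backend
	// myproj-stage-backend
	// myproj-prod-backend
}
```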

Each host group consists of a certain number of hosts, which may sometimes include only one host. However, an important principle that has long been established is that we cannot configure objects smaller than a hostgroup using the new SCM. This approach is logical, as host groups are used to enable multiple hosts for horizontal scaling and ensure high availability for each service. Generally, such hosts share the same configuration, and we operated under the paradigm that if we needed to configure just one host with no hostgroup, we were likely doing something wrong.

However, we eventually concluded that there are scenarios where more detailed configurations are necessary than those applicable to a host group. There are also many cases where we need to apply configurations to several hosts within a particular hostgroup. A special case of this might involve applying configurations in a canary manner to one or a few hosts to minimize the impact on the rest of the hostgroup.

In the early stages of developing the new SCM, we used a workaround involving stopping the SCM agent or locking updates on some hosts. This allowed us to test deployments on smaller subsets of the host group before applying the configuration to the entire group. However, this approach has several obvious issues:


  • Testing subsets of servers prevents them from receiving new configuration updates, affecting not just the part that is currently being tested.
  • Testing subsets of servers stops verification of the configuration state. For example, the SCM agent verifies that the necessary services are running.
  • These manual operations do not improve overall automation.

In accordance with the above, the following requirements for exclusive configuration were established:

  1. The ability to create configurations per host or for a few hosts
  2. The ability to create specific configuration blocks that do not interfere with other modules
  3. The ability to control exclusive configurations remotely to meet CI/CD needs

Possible Ways to Implement This

When we started analyzing potential ways to implement exclusive configurations, we faced a key question: where should we store the configurations? Primarily, we had two options.

Firstly, it could be stored as a simple file, similar to a hostgroup YAML file. This file would have a higher priority compared to the group configuration. However, this approach is not very useful because it does not address the third requirement: it cannot be used from CI without workarounds. Although we ultimately did not choose this option, it became possible to use it indirectly later, after enabling templating for YAML files:

{{- if eq .IP "10.10.10.10" }}
configuration for this host only
{{- else }}
configuration for other hosts in the hostgroup
{{- end }}

Secondly, we considered using an API router for exclusive configuration control. In this case, we can control configurations remotely with proper authorization. Since the API uses Consul to store data, the configuration received by the API can be saved in Consul by the SCM, which can then use these keys in the scheduler to build the whole host configuration. This solution satisfied all requirements.

In practice, implementing this was relatively straightforward: we added this logic at the CLI level of the deployment tool, which processes requests for individual hosts. The API saves this data in a dedicated path: exclusive/host/{ip}. The scheduler then connects to this path when merging configurations, allowing for different configurations for each host. The merged configuration of hosts takes precedence, and any previously declared configurations can be overwritten by these specific settings.

We began using this configuration to implement rollout deployment logic similar to that of Kubernetes for non-Docker projects. The whole logic was integrated at the CLI level, where the tool recognizes the number of hosts in hostgroups, performs deployments across the specified number of hosts, and waits for them to run.

Implementation in the Code

A route api/v3/exclusive has been added to the API that provides a CRUD interface. To add an exclusive configuration, send an HTTP POST request with the following structure:

type ExclusivePostData struct {
    Section     string   `json:"section"`
    Hosts       []string `json:"hosts"`
    Ttl         int      `json:"ttl"`
    Data        string   `json:"data"`
    Immediately bool     `json:"immediately"`
}

Where: section is the context name for the resulting JSON sent to the selected hosts; data is the JSON containing the configuration data; hosts are the selected hosts; ttl is the total time to store this key in Consul; immediately specifies whether to deploy immediately using the push approach.
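
For illustration, a request body matching this structure might look like the following (the nginx section name and the payload inside data are hypothetical, not taken from the article):

```json
{
  "section": "nginx",
  "hosts": ["10.9.2.116"],
  "ttl": 3600,
  "data": "{\"upstream_disabled\": true}",
  "immediately": true
}
```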

Once the API router has received all the necessary data, it saves the key in Consul:

consul:~$ consul kv get -recurse exclusive/10.9.2.116/
exclusive/10.9.2.116/packages:{"iftop":{}}

As mentioned earlier, we needed to update the configuration generator. We added another merge to enrich this per-host configuration:

// merge configuration tree with consul exclusive data
err = mergo.Map(&DefaultConf, ConsulExclusive, mergo.WithOverride, mergo.WithAppendSlice)
if err != nil {
    logger.GenConfLog.Println("mergo Map failed with DefaultConf and ConsulExclusive")
    return map[string]interface{}{}, err
}

The mergo.WithAppendSlice flag is particularly useful in this case, as it allows us to add information to a slice instead of replacing it.
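
As a stdlib-only illustration of these merge semantics (a sketch of the behavior, not mergo's actual implementation), overriding scalar values while appending slices looks like this:

```go
package main

import "fmt"

// mergeWithAppendSlice merges src into dst: maps are merged recursively,
// slices are appended rather than replaced, and other values are overridden.
// This approximates what mergo does with mergo.WithOverride and
// mergo.WithAppendSlice.
func mergeWithAppendSlice(dst, src map[string]interface{}) {
	for k, sv := range src {
		if dv, ok := dst[k]; ok {
			if dm, ok1 := dv.(map[string]interface{}); ok1 {
				if sm, ok2 := sv.(map[string]interface{}); ok2 {
					mergeWithAppendSlice(dm, sm)
					continue
				}
			}
			if ds, ok1 := dv.([]interface{}); ok1 {
				if ss, ok2 := sv.([]interface{}); ok2 {
					dst[k] = append(ds, ss...)
					continue
				}
			}
		}
		dst[k] = sv
	}
}

func main() {
	defaultConf := map[string]interface{}{
		"packages": []interface{}{"nginx"},
		"workers":  4,
	}
	exclusive := map[string]interface{}{
		"packages": []interface{}{"iftop"}, // appended, not replaced
		"workers":  1,                      // overridden for this host
	}
	mergeWithAppendSlice(defaultConf, exclusive)
	fmt.Println(defaultConf["packages"], defaultConf["workers"])
	// [nginx iftop] 1
}
```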

Two more methods, GET and DELETE, were implemented for reading and deleting exclusive configurations.

How to Use This in Practice

The exclusive command has been added to the CLI tool, enabling it to operate with the exclusive API route on the SCM API.

~$ cli exclusive --help
NAME:
   cli exclusive - Get/push/delete exclusive configuration for host(s)

USAGE:
   cli exclusive command [command options] [arguments...]

COMMANDS:
   push     push exclusive configuration for host(s)
   get      get current exclusive configuration for host(s)
   delete   delete exclusive configuration for host(s)
   help, h  Shows a list of commands or help for one command

One example of how exclusive configurations can be helpful is in canary deployments. The following script was implemented for use in CI:

  1. Sending an exclusive configuration to the SCM Nginx module to remove one upstream host from the load balancing pool
  2. Sending an exclusive configuration to the SCM Docker module to update the Docker image on the host
  3. Using various checkers to test the application for updates and health
  4. Removing the exclusive configuration from the SCM and then enabling traffic for the updated backend
  5. Repeating this for each host in the subset
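
The steps above can be sketched as a shell function. This is a sketch under assumptions: the flag names (--hosts, --section, --data, --immediately) and the health checker are hypothetical, not the real cli interface.

```shell
# Canary rollout sketch: one host at a time is taken out of the Nginx
# upstream pool, updated, health-checked, and returned to the pool.
# CLI and CHECK can be overridden (e.g. with stubs) for dry runs.
CLI="${CLI:-cli}"
CHECK="${CHECK:-check-health}"

canary_rollout() {
    image="$1"; shift
    for host in "$@"; do
        # 1. Nginx module: take this upstream host out of the pool
        $CLI exclusive push --hosts "$host" --section nginx \
            --data '{"upstream_disabled": true}' --immediately
        # 2. Docker module: roll the new image on this host only
        $CLI exclusive push --hosts "$host" --section docker \
            --data "{\"image\": \"$image\"}" --immediately
        # 3. Health checks against the updated host
        "$CHECK" "$host" || return 1
        # 4. Remove the exclusive configs; traffic returns to the host
        $CLI exclusive delete --hosts "$host" --section docker
        $CLI exclusive delete --hosts "$host" --section nginx
    done
}
```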

Disadvantages of Using Exclusive Configurations

The main drawback of exclusive configurations is their opacity; users may forget that they have been created. While there is TTL functionality that allows configurations to revert after a specified time, the use case for TTL is primarily suitable for testing.

Metrics and Logs

Pull SCM agents cannot provide detailed information in any way other than through logs and metrics. We implemented different loggers at both the agent and API levels. This improvement enhanced observability and debugging during the development of our SCM and remains useful now that active development is done.

Many loggers write to multiple files based on their area. Any references to the file module are logged in 'file.log', service-related logs go to 'service.log', and package-related logs are written to 'package.log'. What about the other modules? Yes, each module has its own designated log file, such as:

  • aerospike.log
  • nginx.log
  • clickhouse.log
  • etc.

At the API level, configuration mergers write logs to their respective files. Similarly, at the SCM agent level, parsers also log to the relevant files. The implementation for this is quite simple:

func LogInit() {
    PackagesLog = CreateLog(conf.LogPackagesPath, "packages")
    ServicesLog = CreateLog(conf.LogServicesPath, "services")
    VaultLog = CreateLog(conf.LogVaultPath, "vault")
    AerospikeLog = CreateLog(conf.LogAerospikePath, "aerospike")
    PKILog = CreateLog(conf.LogPKIPath, "pki")
    DiskLog = CreateLog(conf.LogDiskPath, "disk")
    UserLog = CreateLog(conf.LogUserPath, "user")
}

func CreateLog(LogPath string, Section string) *log.Logger {
    file, err := os.OpenFile(LogPath, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0660)
    if err != nil {
        log.Fatalln("Failed to open log file", LogPath, ":", err)
    }
    retLogger := log.New(file, ": ", log.Ldate|log.Ltime|log.Lshortfile)

    SyslogPrefix := os.Getenv("SYSLOG_PREFIX")
    SyslogAddr := os.Getenv("SYSLOG_ENDPOINT")
    SyslogProto := os.Getenv("SYSLOG_PROTO")
    if SyslogPrefix != "" && Section != "" {
        TagName := SyslogPrefix + "-" + Section

        if SyslogAddr != "" {
            if SyslogProto == "" {
                SyslogProto = "udp"
            }

            syslogger, err := syslog.Dial(SyslogProto, SyslogAddr, syslog.LOG_INFO, TagName)
            if err != nil {
                log.Fatalln(err)
            }

            retLogger.SetOutput(syslogger)
        } else {
            syslogger, err := syslog.New(syslog.LOG_INFO, TagName)
            if err != nil {
                log.Fatalln(err)
            }

            retLogger.SetOutput(syslogger)
        }
    }

    return retLogger
}

Syslog serves as an alternative way to monitor issues across the infrastructure. Syslog messages are received by the local rsyslog and sent to ElasticSearch.

The most challenging aspect is logging the differences in files due to potentially sensitive information. These logs are stored only on the filesystem in 'files.log', with permissions set to 0600, accessible only to the root user.

With the push scheme, it is possible to return HTTP responses indicating the differences to the user that initiated the deployment:

func FilesParser(ApiResponse map[string]interface{}, ClientResponse map[string]interface{}) {
    FilesMap, Resp, ActiveParser := ParserGetData(ApiResponse, ClientResponse, "files")
...
    for FileName, FileOptions := range FilesMap {
...
        GenFile, err := GenTmpFileName(FileName)
        if err != nil {
            logger.FilesLog.Println("Cannot write temporary file:", GenFile, "with error", err)
            Resp[FileName] = map[string]interface{}{
                "state": "error",
                "error": err.Error(),
            }
            metrics.FileError.Inc()
            continue
        }

        err = ioutil.WriteFile(GenFile, data, filemode)
        if err != nil {
            Resp[FileName] = map[string]interface{}{
                "state": "error",
                "error": err.Error(),
            }
            logger.FilesLog.Println("Cannot write to file:", FileName, ":", err)
            metrics.FileError.Inc()
            continue
        }

        err, result := CompareAndCopyFile(FileName, GenFile, filemode, FileUser, FileGroup)
        if err == nil {
            if result {
                Resp[FileName] = map[string]interface{}{
                    "state": "changed",
                    "diff":  CalculatedDiff,
                }
                metrics.FileDeployed.Inc()
            }
        } else {
            Resp[FileName] = map[string]interface{}{
                "state": "error",
                "error": err.Error(),
            }
            metrics.FileError.Inc()
            continue
        }
    }
...

This demonstrates a functioning feedback system that provides responses to the push agents. Similar to Ansible or SaltStack, this system makes it possible to monitor the results of deployments on hosts and highlight errors. This example also shows how the metrics work. We have a variety of metrics; some function only at the agent level, while others operate at the API/scheduler level.

Regarding the metrics, we measure the number of service restarts and reloads, file changes, attempted package installations, and more. Consequently, we can monitor infrastructure-level issues related to package managers and deployment failures. For example, the following graph illustrates problems with package installations across the infrastructure:

Problems with package installations within the infrastructure

Another example measures the last restarts of services over the month caused by our SCM agent:

Last restarts of services over the month

We also have metrics at the API level that measure:

  • The delta time between the start of cache generation for all hosts and its completion
  • Connection timeouts, refusals, and other issues with data sources
  • Cache misses when the agent requests fresh configurations
  • The update rate of the cache
  • Newly discovered hosts in the infrastructure

Associated Templates

In our infrastructure, there are not many reasons to use the include instruction in templates or at the software level. For most cases, configurations from templates can be generated automatically, and many of us don't make changes to this. A monofile is a better choice in this scenario, as managing one file is simpler than dealing with multiple files in SCM. For example, if you want to deliver a file to many servers and then decide to delete it, you would need to create a task to delete that file and subsequently remove its context. In some cases, people may forget to do this and only delete the file from the SCM repository, leaving it unmanaged by the SCM.

With our templating functionality, it is unnecessary to create many file includes, as we are confident that our SCM will generate the configuration for any service regardless. However, there are some cases where this approach may fail. For example, there are entities with complex configurations that are difficult to template. Some software offers many more options than can be defined by the template, and some of these options may be unstructured. For instance, configuring iptables or nftables can have a complex structure.

Another area of concern is web servers. We have found that both Nginx and Envoy offer a rich set of options, making template creation for these configurations problematic. Moreover, while firewall configurations may not be too large and may not require many files, web servers can describe many projects in a single configuration file. Such configurations can grow to 1 MB or even 10 MB or more. Manually locating a specific part of the configuration within such a large file is cumbersome, making includes from the templater essential in this case.

As mentioned earlier, using Nginx-level includes is not convenient for us because it can lead to a loss of control by SCM if a file is deleted. We wanted to create templates at the SCM level and deploy them as a single composed file to the server.

To achieve this, the Go module text/template provides the capability to pass files through a New method with the template name and file contents.

func FileTemplateWithIncludes(ApiResponse map[string]interface{}, templateFileName string, templatesDirs []string) (string, error) {
    templateDirsWithTemplates := make(map[string][]string)
    for _, dir := range templatesDirs {
        dirPath := conf.LConf.FilesDir + "/data/" + dir
        templateDirsWithTemplates[dir] = GetFilesFromDirsBySuffix([]string{dirPath}, conf.LConf.TemplateSuffix)
    }
    dataMainFile, err := ioutil.ReadFile(templateFileName)
    if err != nil {
        return "", err
    }

    tmpl, err := template.New(templateFileName).Funcs(sprig.TxtFuncMap()).Parse(string(dataMainFile))
    if err != nil {
        return "", err
    }
    for relativeTemplateDir, templatesSlice := range templateDirsWithTemplates {
        for _, currentTemplatePath := range templatesSlice {
            data, err := ioutil.ReadFile(currentTemplatePath)
            if err != nil {
                continue
            }
            if !strings.HasSuffix(relativeTemplateDir, "/") {
                relativeTemplateDir = relativeTemplateDir + "/"
            }
            templateName := relativeTemplateDir + filepath.Base(currentTemplatePath)
            _, err = tmpl.New(templateName).Funcs(sprig.TxtFuncMap()).Parse(string(data))
            if err != nil {
                continue
            }
        }
    }
    templateBuffer := new(bytes.Buffer)
    err = tmpl.ExecuteTemplate(templateBuffer, templateFileName, ApiResponse)
    if err != nil {
        return "", err
    }
    return templateBuffer.String(), nil
}
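
A minimal, self-contained illustration of this mechanism (not the SCM code itself): a sub-template registered via tmpl.New(name) becomes available to {{ template "name" }} includes in the main template.

```go
package main

import (
	"bytes"
	"fmt"
	"log"
	"text/template"
)

// Render composes a main template with one named sub-template, using the
// same text/template mechanism the SCM relies on to build a single
// composed file from includes.
func Render(mainTmpl, includeName, includeTmpl string, data interface{}) (string, error) {
	tmpl, err := template.New("main").Parse(mainTmpl)
	if err != nil {
		return "", err
	}
	// Register the include under its name, as the SCM does per template file.
	if _, err := tmpl.New(includeName).Parse(includeTmpl); err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := tmpl.ExecuteTemplate(&buf, "main", data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	mainText := "server {\n{{ template \"nginx/upstream.tmpl\" . }}\n}"
	include := "  upstream backend: {{ .IP }};"
	out, err := Render(mainText, "nginx/upstream.tmpl", include, map[string]string{"IP": "10.0.0.1"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(out)
	// server {
	//   upstream backend: 10.0.0.1;
	// }
}
```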

However, this changes how we handle template files. Previously, we templated files at the agent level, meaning that the templates were carried to the server as-is, and the file manager would template them at that level. Neither the API nor the scheduler performed templating at their level, which distributed the workload across the infrastructure. This scheme doesn't work for includes, since it requires all necessary template files to be present on the server where we call the templater. Only the API has access to such files.

We now have an implementation in which the method for templating simple templates or templates with includes is determined by a JSON key.

} else if Fdata.From != "" {
    if Fdata.Template == "goInclude" {
        GoInclude(DefaultConf, FName)
    } else if Fdata.From.(string) != "" {
        filesPath := FilesDir + "/data/" + Fdata.From
        Fdata.Datadata, err = ioutil.ReadFile(filesPath)
        if err != nil {
            logger.FilesLog.Println("file read error", FileName, err)
        }
    }
}
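
For illustration, a file entry selecting include-aware templating might look like this (the field names are inferred from the code above and are an assumption, not the authoritative schema):

```json
{
  "files": {
    "/etc/nginx/nginx.conf": {
      "from": "nginx/nginx.conf.tmpl",
      "template": "goInclude"
    }
  }
}
```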

Since not many hosts use such large configurations, this doesn't impose a significant load on our SCM's API. If it does present problems in the future, we could implement transferring a specified directory with templates to the agent and reintroduce configuration templating in a distributed manner.

Example of Creating a New Module

To simplify the process of creating new modules for new team members, we developed a dummy handler that provides a “spherical cow” example of usage applicable in many cases. It looks like this:

var DummyLog *log.Logger

func DummyWaitHealthcheck(Dummy DummyType, ApiResponse map[string]interface{}) bool {
    // check that the Dummy service is running
    return true
}

type DummyType struct {
    State       string `json:"state"`
    PackageName string `json:"package_name"`
    ServiceName string `json:"service_name"`
}

func DummyParser(ApiResponse map[string]interface{}, ClientResponse map[string]interface{}) {
    dummyData, Resp, ActiveParser := common.ParserGetData(ApiResponse, ClientResponse, "dummy")
    if !ActiveParser {
        return
    }

    if DummyLog == nil {
        DummyLog = logger.CreateLog("dummy")
    }

    var dummy DummyType
    err := mapstructure.WeakDecode(dummyData, &dummy)
    if err != nil {
        DummyLog.Println("Err while decoding map with mapstructure. Err:", err)
    }

    DummyFlagName := "dummy.service"

    var Hostgroup string
    if ApiResponse["Hostgroup"] != nil {
        Hostgroup = ApiResponse["Hostgroup"].(string)
    }

    if common.GetFlag(DummyFlagName) {
        LockKey := Hostgroup + "/" + DummyFlagName
        LockRestartKey := "restart-" + DummyFlagName

        if common.SharedSelfLock(LockKey, "0", ApiResponse["IP"].(string)) {
            DummyLog.Println("common.SharedSelfLock set ok")
            if !common.GetFlag(LockRestartKey) {
                DummyLog.Println("call RollingRestart")
                common.SetFlag(LockRestartKey)
                common.DaemonReload()
                common.ServiceRestart(DummyFlagName)
            }
        } else {
            DummyLog.Println("No deploy due to found locks in consul:", LockKey)
        }

        DummyLog.Println("check local flag", LockRestartKey)
        if common.GetFlag(LockRestartKey) {
            DummyLog.Println("my flag set")
            if DummyWaitHealthcheck(dummy, ApiResponse) {
                common.SharedUnlock(LockKey)
                common.DelFlag(LockRestartKey)
                common.DelFlag(DummyFlagName)
            }
        }
        Resp["status"] = "deploying"
    } else {
        Resp["status"] = "no changes"
    }
}

func DummyMerger(ApiResponse map[string]interface{}) {
    dummyData, ActiveParser := common.MergerGetData(ApiResponse, "dummy")
    if !ActiveParser {
        return
    }

    if DummyLog == nil {
        DummyLog = logger.CreateLog("dummy")
    }

    var dummy DummyType
    err := mapstructure.Decode(dummyData, &dummy)
    if err != nil {
        DummyLog.Println("Err while decoding map with mapstructure. Err:", err)
    }

    DummyService := "dummy.service"
    if dummy.ServiceName != "" {
        DummyService = dummy.ServiceName
    }
    if dummy.State != "" {
        common.APISvcSetState(ApiResponse, DummyService, dummy.State)
    } else {
        common.APISvcSetState(ApiResponse, DummyService, "running")
    }

    DummyPackage := "dummy"
    if dummy.PackageName != "" {
        DummyPackage = dummy.PackageName
    }
    common.APIPackagesAdd(ApiResponse, DummyPackage, "", "", []string{}, []string{}, []string{})

    Envs := map[string]interface{}{
        "DUMMY_SERVICE": "true",
        "DUMMY_HOST_ID": "1",
    }
    common.UsersAdd(ApiResponse, "dummy", Envs, "", "", "", "", 0, []string{}, "", false)
    common.DirectoryAdd(ApiResponse, "/var/log/dummy/", "0755", "dummy", "nobody")

    common.FileAdd(ApiResponse, false, "/etc/dummy.conf", "dummy/dummy.conf", "go", "present", "root", "root", "", []string{}, []string{}, []string{"dummy.service"}, []string{})

    Url := "http://localhost:1112"

    AlligatorAddAggregate(ApiResponse, "jmx", Url, []string{})
}

By simply copying and pasting, finding and replacing, and making a few edits, they can create their own module to configure any other software using the provided code templates. This approach is effective: many people who began developing our SCM started with this experience. However, we still have too little SCM documentation, but we will work on improving it.

© 2025 https://techtrendfeed.com/ - All Rights Reserved
