  • Operations


    48 stories in this category

      0
      Votes

      DevOps vs SRE

      medium.com - DevOps is a set of practices that aims to minimize the software development lifecycle and speed up the delivery of higher-quality software…
      Continue reading on Medium »
      0
      Votes

      AWS Gateway LoadBalancer: A Load Balancer that we deserve

      iamabhishek-dubey.medium.com - Nowadays, LoadBalancing is one of the basic needs for the application systems to perform optimally while considering some important…
      Continue reading on Medium »
      0
      Votes

      Run Apache APISIX on Microsoft Azure Container Instance

      dev.to - Introduction
Apache APISIX is an open-source microservice API gateway and platform designed to handle microservice requests with high availability, fault tolerance, and distributed coordination. You can install Apache APISIX by different methods (Docker, Helm, or RPM) and run it on various public cloud providers thanks to its cloud-native design. In this post, you will learn how to easily run the Apache APISIX API Gateway in Azure Container Instances with multiple containers (APISIX and etcd) straight from the Docker CLI.
☝️ Alternatively, there are several other ways to deploy APISIX in Azure:
      Azure Kubernetes Service, if you want a complete orchestration solution and if you need to manage your deployment and infrastructure.
      Azure Service Fabric for containers and services orchestration.
      Azure Compute, to build your own solution.
      In this walkthrough, you will
      ✔️ Create an Azure resource group.
      ✔️ Configure Azure Container Instances.
      ✔️ Create an Azure context for Docker to offload Apisix and etcd containers execution to ACI.
      ✔️ Get Apache APISIX example source code for Docker from GitHub.
✔️ Modify the Docker Compose file there.
✔️ Set up volumes in an Azure Storage account with Azure File Share.
✔️ Add the APISIX config files to the Azure file share.
✔️ Bring up APISIX in Azure Container Instances.
✔️ Verify APISIX is running.
✔️ Clean up after you finish.
      Prerequisites
      ➡️ Azure subscription - create a free account before you begin.
➡️ Azure CLI - you must have the Azure CLI installed on your local computer, since we will use it to interact with all Azure resources. See Install the Azure CLI for setup instructions.
➡️ Docker Desktop - you also need Docker Desktop installed locally to complete this tutorial. It is available for Windows and macOS. On Linux, install the Docker ACI Integration CLI instead.
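Before continuing, a quick sanity check that the prerequisites are installed (a minimal sketch; exact version numbers will differ on your machine):

az version                 # Azure CLI is installed and on the PATH
docker --version           # Docker Desktop / Docker CLI is installed
docker compose version     # Compose v2 is available for the deployment step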
      We use Docker Compose to define and deploy two containers for Apisix and etcd as a container group in Azure Container Instances.
💁 Run containers in Azure Container Instances on demand when you develop cloud-native apps like APISIX with Docker and want to switch seamlessly from local development to cloud deployment. This capability is enabled by the integration between Docker and Azure.
      Create a resource group
      Before you create and manage your APISIX container instance, you need a resource group to deploy it to. A resource group is a logical collection into which all Azure resources are deployed and managed.
      Create a resource group with the az group create command. In the following example, a resource group named apisix is created in the centralus region:

az group create --name apisix --location centralus
Configure Azure Container Instances
      You can now run the Azure CLI with the az command from any command-line interface.
We use Docker commands to run containers in Azure Container Instances, so the first step is to log into Azure by running the following command:

docker login azure
You can also log in using a Service Principal (SP). Provide the ID and password of the SP using the --client-id and --client-secret arguments when calling docker login azure.
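For non-interactive logins, a hedged sketch of the Service Principal variant (the --tenant-id flag and the placeholder values are assumptions about your Azure AD setup):

docker login azure \
  --client-id <SP_APP_ID> \
  --client-secret <SP_PASSWORD> \
  --tenant-id <AZURE_TENANT_ID>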
      Create an Azure context
Once logged in, you will create an ACI context by running docker context create aci. The context associates requests issued by the Docker CLI with nodes or clusters in ACI. For example, to create a context called apisixacicontext you can use:

docker context create aci apisixacicontext
docker context create is an interactive command. It guides you through the process of configuring a new Docker context for our existing Azure resource group.
      Run docker context ls to confirm that you added the ACI context to your Docker contexts:

docker context ls
Output:

      Next, change to the ACI context. Subsequent Docker commands run in this context.

docker context use apisixacicontext
Get Apache APISIX from GitHub
In this demo, we use the Apache APISIX Docker repo, which contains an example docker-compose.yaml file and other config files that show how to start APISIX using Docker Compose. Let's try out this example:
      Use git to clone the repository and cd into the example folder.

git clone 'https://github.com/apache/apisix-docker'
cd apisix-docker/example
Modify Docker compose file
      Next, open docker-compose.yaml in a text editor. The example docker compose file defines several services: apisix-dashboard, apisix, etcd, web1, web2, prometheus, and grafana:
apisix-dashboard, apisix, and etcd are the essential services required for starting the dashboard, the gateway, and etcd. web1 and web2 are sample backend services used for testing purposes; they use the nginx-alpine image. prometheus and grafana are services used for exposing metrics of the running services. For the sake of simplicity, we are going to run only the apisix and etcd services in this demo. We simply remove the other services and define the volumes etcd-data and apisix-data. In the next step, we back these volumes with Azure file shares.

      version: "3" services: apisix: image: apache/apisix:2.13.1-alpine restart: always volumes: - apisix-data:/apisix/conf/ depends_on: - etcd ports: - "9080:9080/tcp" - "9091:9091/tcp" - "9443:9443/tcp" - "9092:9092/tcp" networks: apisix: etcd: image: bitnami/etcd:3.4.15 restart: always volumes: - etcd-data:/bitnami/etcd environment: ETCD_ENABLE_V2: "true" ALLOW_NONE_AUTHENTICATION: "yes" ETCD_ADVERTISE_CLIENT_URLS: "http://0.0.0.0:2379" ETCD_LISTEN_CLIENT_URLS: "http://0.0.0.0:2379" ports: - "2379:2379/tcp" networks: apisix: networks: apisix: driver: bridge volumes: etcd-data: driver: azure_file driver_opts: share_name: etcdshare storage_account_name: apisixstorage apisix-data: driver: azure_file driver_opts: share_name: apisixshare storage_account_name: apisixstorage Setup volumes using Azure file share
Apache APISIX has to persist etcd state and mount external configuration files, such as /apisix_conf/conf.yaml (which defines the configs for apisix) from the repo folder, onto the containers. Since Azure Container Instances are stateless, you can store persistent data outside of the container filesystem in ACI using an Azure file share. If the container is restarted, crashes, or stops, all of its state would otherwise be lost.
      ℹ️ More about how to mount an Azure file share in Azure Container Instances
As you may notice, we declared two volumes in the docker-compose.yaml file and set the driver to azure_file. Before using an Azure file share with Azure Container Instances, you must create a new Azure Storage account to host the file share and add a file share to it. To create an Azure Storage account named, for example, apisixstorage:

az storage account create --resource-group apisix --name apisixstorage --location centralus --sku Standard_LRS
Run the following two commands to create the two Azure file shares in that Storage account using docker volume create.
      For the first volume apisixshare:

docker volume create apisixshare --storage-account apisixstorage
For the second volume etcdshare:

docker volume create etcdshare --storage-account apisixstorage
You can also see the created storage account with its two file shares in the Azure portal.
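If you prefer to create the file shares with the Azure CLI instead of docker volume create, a roughly equivalent sketch (the key lookup and the share names mirror the compose file; this alternative is an assumption, not a step from the walkthrough above):

# Look up a storage account key, then create both shares
STORAGE_KEY=$(az storage account keys list --resource-group apisix --account-name apisixstorage --query "[0].value" -o tsv)
az storage share create --name apisixshare --account-name apisixstorage --account-key "$STORAGE_KEY"
az storage share create --name etcdshare --account-name apisixstorage --account-key "$STORAGE_KEY"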

      Having the volume in place, we can run stateful APISIX in ACI as shown in the next section.
      Add APISIX config files to Azure File Share
Now we need to manually upload the Apache APISIX config files to the Azure file share. You can simply use the Azure portal to do so, or you can always use the az storage file upload Azure CLI command.
Find File shares in the navigation bar of the apisixstorage storage account we created in the previous step and select the apisixshare file share to open it. The file share panel opens.
In the menu at the top, select Upload. The Upload files panel opens. Download and add all files, including directories, from the Apache APISIX conf folder. The final list of config files in the apisixshare file share should match the Apache APISIX conf folder.
...
volumes:
  - apisix-data:/apisix/conf/
...
Behind the scenes, ACI copies all of the above config files from the apisixshare file share to the /apisix/conf/ folder in the Linux APISIX container.
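If you prefer the CLI over the portal for the upload, a hedged sketch using az storage file upload-batch (the local ./apisix_conf source path is an assumption about where the config files live in your clone):

# Upload the whole config directory to the apisixshare file share
az storage file upload-batch \
  --account-name apisixstorage \
  --destination apisixshare \
  --source ./apisix_conf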

      Bring up APISIX in ACI
Finally, we can now deploy APISIX with etcd to Azure Container Instances. Execute docker compose up to create the container group in Azure Container Instances.

docker compose up
Wait until the container group is deployed. Then, you can also verify that the container instances were created in the Azure portal.

      Next, you run docker ps to see the running containers and the IP address assigned to the container group.
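For reference, the command is simply:

docker ps

With the ACI context active, the output typically lists the containers in the group and shows the public IP address and ports in the PORTS column (the exact columns depend on your Docker version).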

      To see the logs of the APISIX, run the docker logs command. For example:

docker logs example_apisix
Sample output:

... 2022/06/11 19:02:45 [warn] 211#211: *4 [lua] plugin.lua:223: load_stream(): new plugins: {"limit-conn":true,"ip-restriction":true,"mqtt-proxy":true}, context: ...
Verify Apache APISIX is running
To verify that Apache APISIX is running in the cloud, run the curl command below and check the response from APISIX's REST Admin API. You need to replace ACI_PUBLIC_IP_ADDRESS with your container group's public IP address or FQDN.

      curl "http://{ACI_PUBLIC_IP_ADDRESS}:9080/apisix/admin/services/" -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' The response indicates that APISIX is running successfully:

      { "count":0, "action":"get", "node":{ "key":"/apisix/services", "nodes":[], "dir":true } } Here we go, Apache APISIX is up and running in Azure Container Instance group and responding to your requests 👏💪.
      Troubleshoot😕
When you request the Apache APISIX Admin API, you may get a 403 Forbidden HTTP status error. The reason is that your client IP address might not be whitelisted in the APISIX config file. The REST Admin API, which controls Apache APISIX, only allows access from 127.0.0.1 by default; you can modify the allow_admin field in conf/config.yaml to specify a list of IPs that are allowed to call the Admin API. Also, note that the Admin API uses key auth to verify the identity of the caller. The admin_key field in conf/config.yaml should be modified before deployment to ensure security.

allow_admin:              # http://nginx.org/en/docs/http/ngx_http_access_module.html#allow
  - 0.0.0.0/0             # We need to restrict ip access rules for security. 0.0.0.0/0 is for test.
  - YOUR_IP_ADDRESS
admin_key:
  - name: "admin"
    key: YOUR_ADMIN_API_KEY
    role: admin           # admin: manage all configuration data
Clean up after you finish
When you finish trying the application, stop the application and containers with docker compose down inside the apisix-docker/example folder.

docker compose down
This command deletes the apisix and etcd containers in Azure Container Instances.
You can also remove the Docker context created during this demo with the docker context rm apisixacicontext command, after switching back to the default context:

docker context use default
Then, run:

docker context rm apisixacicontext
Conclusion
In this tutorial, we learned how to deploy Apache APISIX to the Azure cloud with Docker Compose, switching from running a multi-container APISIX setup locally to running it in Azure Container Instances. From this stage, you can create routes and upstreams and manage the traffic to your backend services with the available built-in plugins if you want to take advantage of more of APISIX's features. You can also provision the other services from the APISIX Docker Compose file, such as prometheus and grafana.
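As a concrete next step, here is a minimal sketch of creating a route through the Admin API (the httpbin.org upstream and the /get URI are illustrative choices, not part of the original walkthrough, and the API key must match the admin_key you configured):

# Create route 1: proxy requests for /get to httpbin.org
curl "http://{ACI_PUBLIC_IP_ADDRESS}:9080/apisix/admin/routes/1" \
  -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' \
  -X PUT -d '
{
  "uri": "/get",
  "upstream": {
    "type": "roundrobin",
    "nodes": {
      "httpbin.org:80": 1
    }
  }
}'

# Send a test request through the new route
curl -i "http://{ACI_PUBLIC_IP_ADDRESS}:9080/get"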
      Recommended content
      ➔ Watch Video Tutorial Getting Started with Apache APISIX
      ➔ Read the blog post Overview of Apache APISIX API Gateway Plugins
      ➔ Read the blog post Centralized Authentication with Apache APISIX Plugins
      ➔ Read the blog post API Security with OIDC by using Apache APISIX and Microsoft Azure AD
      Community⤵️
      🙋 Join the Apache APISIX Community
      🐦 Follow us on Twitter
      📝 Find us on Slack
      📧 Mail to us with your questions
      0
      Votes

      Why Cloud and DevOps are better together

      medium.com - Companies that want to increase their competitiveness should not ignore digital transformation. This is the reason why DevOps and cloud…
      Continue reading on Medium »
      0
      Votes

      DevOps: CI/CD explanation

      medium.com - CI/CD enables the tech companies to improve their products many times per day. Here’s what you need to know to do the same.
      Continue reading on Medium »
      1
      Votes

      Sysdig Introduces Sysdig Advisor to Drastically Simplify Kubernetes Troubleshooting

      devops.com - Single view of performance and events accelerates Kubernetes troubleshooting by up to 10x
      VALENCIA, SPAIN, (KubeCon + CloudNativeCon Europe), May 16, 2022 — Sysdig, the unified container and cloud security leader, announced the availability of Sysdig Advisor, a Kubernetes troubleshooting feature that consolidates and prioritizes relevant performance details in Sysdig Monitor. By providing a single view of performance and event information, Sysdig Advisor enables operations, developers, and site reliability engineering (SRE) teams to troubleshoot issues faster while decreasing the number of tools needed.
      Video: In real-time, watch troubleshooting of the same issue with and without Sysdig Advisor.
      The complexity of Kubernetes – with countless components and variables – makes it extremely difficult to debug problems and prioritize actions. Knowing how to debug, where to begin, or what to look for can be a challenge. Operations teams and SREs are often forced to pull up the command line interface and run tools like kubectl to inspect the situation and search for the root cause. With so many moving parts in Kubernetes-based applications, remediation can take hours or more, decreasing availability and impacting the end-user experience.
      With a click of a button, Sysdig Advisor presents all relevant capacity, event, alerts, and troubleshooting information. Since this information is presented in the context of Kubernetes objects, users can quickly drill down when looking for the source of a performance problem. Sysdig Advisor displays a prioritized list of issues and related live logs to surface the biggest problem areas and accelerate time to resolution.
      Key Benefits of Sysdig Advisor 
Accelerates troubleshooting by up to 10x: Sysdig Advisor produces a prioritized list of issues, giving administrators visibility into what problems to address first. When compared to traditional methodologies, teams can resolve Kubernetes issues up to 10x faster with Sysdig Advisor by reducing the time it takes to find critical information, including capacity, utilization, event, and alert data for clusters, namespaces, workloads, and pods.
Reduces troubleshooting resource count: Sysdig Advisor reduces the dependence on a side-by-side comparison of blogs, dashboards, logs, and command line output needed to troubleshoot Kubernetes environments. The simple user interface surfaces all the important details in a single unified tool with a curated, actionable set of steps for remediation.
Increases troubleshooting access without increasing security risk: Security teams are often concerned about providing broad access to command-line tools, such as kubectl. Sysdig Advisor provides quick access to the same level of information to users across the organization, without being overly permissive.
"Kubernetes is complex, with countless components and variables that make it difficult to understand how, why, and when something goes wrong. Any SRE knows the pain of wading through multiple tools and getting multiple teams involved when troubleshooting an alert," said Loris Degioanni, founder and CTO at Sysdig. "Now with Sysdig Advisor, they can efficiently debug issues and get back to work on deploying new releases."
      The Sysdig Approach 
      Sysdig is driving the standard for unified cloud and container security so DevOps and security teams can confidently secure containers, Kubernetes, and cloud services. Sysdig offers two products, Sysdig Secure and Sysdig Monitor, and the Sysdig platform architecture underpins both products. Sysdig Monitor provides cloud and Kubernetes monitoring that is fully open source Prometheus compatible. With Sysdig Secure, teams find and prioritize software vulnerabilities, detect and respond to threats, and manage cloud configurations, permissions, and compliance. Sysdig provides a single view of risk from source to run, with no blind spots, no guesswork, no black boxes.
      Availability 
      Sysdig Advisor is available now to Sysdig Monitor users at no additional cost. Additional troubleshooting features will be introduced over the coming weeks.
      Resources
      About Sysdig
      Sysdig is driving the standard for cloud and container security. The company pioneered cloud-native runtime threat detection and response by creating Falco and Sysdig as open source standards and key building blocks of the Sysdig platform. With the platform, teams can find and prioritize software vulnerabilities, detect and respond to threats, and manage cloud configurations, permissions, and compliance. From containers and Kubernetes to cloud services, teams get a single view of risk from source to run, with no blind spots, no guesswork, no black boxes. The largest and most innovative companies around the world rely on Sysdig.
      Media Contact
      Amanda McKinney Smith
      [email protected]
      703-473-4051
      1
      Votes

      Sysdig Open Source is Extended to Secure Cloud Services

      devops.com - New integration enables any Falco plugin to be used for Sysdig OSS
      VALENCIA, SPAIN, (KubeCon + CloudNativeCon Europe), May 16, 2022 — Sysdig, the unified container and cloud security leader, announced that Sysdig open source, the incident response standard for containers, has been extended to the cloud. Using system calls, Sysdig open source (Sysdig OSS) traditionally offers deep observability into running applications, as well as file system access and network activity, which speeds incident response and troubleshooting. Teams can quickly filter information from Sysdig OSS and take action. With the announcement of this new integration, these capabilities have been extended beyond containers to any cloud environment.
      The complexity of cloud-native applications – with countless components and variables – makes it extremely difficult for security analysts and system administrators to quickly triage alerts and debug problems. Sysdig OSS captures process, file system, and network activity in real time and with a high degree of granularity. The tool, which has nearly two million downloads and 6,850 GitHub stars, surfaces everything from executed commands and file system activity to network activity. Sysdig OSS then offers advanced filtering and troubleshooting capabilities, supporting root cause analysis for security and performance issues.
      Using a new plugin framework – originally developed by the open source community for the CNCF project Falco – Sysdig extends the number of sources Sysdig OSS can be connected with to anything that generates logs or events, including Azure, Google, and AWS CloudTrail logs. Going forward, every plugin developed for Falco can also be leveraged by Sysdig OSS. Using one tool, like Sysdig OSS, to observe events from the entire cloud-native environment streamlines investigations. Using a different tool for each environment adds complexity, which makes it massively harder to troubleshoot.
      Learn more about this framework in the Sysdig OSS 0.29 new release blog
      Sysdig’s Commitment to Open Source
      Sysdig was founded as an open source company and Sysdig Secure and Sysdig Monitor were both built on an open source foundation to address the security challenges of modern cloud applications. Both projects were created by Sysdig to leverage deep visibility as a foundation for security, and they have become standards for container and cloud threat detection and incident response. Falco, which was contributed to the CNCF in 2018, is now an incubation-level hosted project with more than 45 million downloads.