Creating the Application in the EKS Cluster

Our application is made up of a number of modules. In this section we will deploy the application into the EKS cluster.

The application architecture is shown below:

Arch Diagram

The components are listed below; a quick way to cross-check this port mapping with kubectl is sketched right after the list:

Client

  • client.py

backend services

  • Namespace: inventory-service (inventoryService.py) -> containerPort: 5000, Port:80
  • Namespace: database-service (databaseService.py) -> containerPort: 5000, Port:80
  • Namespace: payment-service (paymentService.py) -> containerPort: 5000, Port:80
  • Namespace: authentication-service (authenticationService.py) -> containerPort: 5000, Port:80
  • Namespace: recommendation-service (recommendationService.py) -> containerPort: 5000, Port:80
  • Namespace: order-service (orderService.py) -> containerPort: 5000, Port:80
  • Namespace: analytics-service -> containerPort: 8087, Port:80
  • Namespace: otel-collector -> containerPort: 55680, Port:55680 (GRPC)
  • Namespace: data-prepper -> containerPort: 21890, Port:21890 and containerPort: 2021, Port:2021
  • Namespace: logging (fluentbit) -> containerPort: 2020, Port:2020

database/storage

  • Namespace: mysql -> containerPort: 3306, Port:3306
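
Once the manifests have been applied later in this section, the namespace-to-port mapping above can be cross-checked against the live Services. A minimal check (run after the deployment steps below have completed):

# List every Service together with its namespace and exposed ports
kubectl get svc --all-namespaces

# Inspect a single component in more detail, e.g. the inventory service
kubectl -n inventory-service get svc -o wide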

API access invokes the following chains of components:

  • /server_request_login (authenticationService) -> /recommend (recommendationService) -> /read_inventory (inventoryService) -> /get_inventory (databaseService) -> mysql
  • /checkout (paymentService) -> /update_inventory (inventoryService) -> /update_item (databaseService) -> mysql
  • /update_order (orderService) -> /add_item_to_cart or /remove_item_from_cart (databaseService) -> mysql
  • /clear_order (orderService) -> /cart_empty (databaseService) -> mysql
  • /get_order (orderService) -> /get_cart (databaseService) -> mysql
  • /pay_order (orderService) -> /cart_sold (databaseService) -> mysql

Every API call also invokes /logs (analytics-service).
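
To exercise one of these chains by hand, the corresponding Service can be port-forwarded and called with curl. This is only a hypothetical sketch: the Service name order-service and a parameter-less GET are assumptions, and the real request payloads are defined in client.py.

# Forward the order-service Service (port 80, per the list above) to localhost
kubectl -n order-service port-forward svc/order-service 8080:80 &

# Hypothetical call into the /get_order -> /get_cart -> mysql chain
curl -s http://localhost:8080/get_order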

Building the Microservices

We will package each of the applications above into a container image and then deploy them into the EKS cluster:

Open the Cloud9 IDE:

Run the following commands in the Cloud9 terminal to install the required tools:

# Disable Cloud9 AWS Manage Temporary Credentials
aws cloud9 update-environment  --environment-id $C9_PID --managed-credentials-action DISABLE
rm -vf ${HOME}/.aws/credentials

# Install jq
sudo yum -y -q install jq

# Update awscli
pip install --user --upgrade awscli

# Install awscli v2
curl -O "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" 
unzip -o awscli-exe-linux-x86_64.zip
sudo ./aws/install
rm awscli-exe-linux-x86_64.zip

# Install bash-completion
sudo yum -y install jq gettext bash-completion moreutils

# Configure AWS CLI
export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
export TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
export AWS_REGION=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')
export AZS=($(aws ec2 describe-availability-zones --query 'AvailabilityZones[].ZoneName' --output text --region $AWS_REGION))

echo "export ACCOUNT_ID=${ACCOUNT_ID}" | tee -a ~/.bash_profile
echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile
echo "export AZS=(${AZS[@]})" | tee -a ~/.bash_profile
aws configure set default.region ${AWS_REGION}
aws configure get default.region

# Configure network variables (VPC, Priv/Pub-subnets)
export MyVPC=$(aws ec2 describe-vpcs  --query 'Vpcs[*].[VpcId]' --filters "Name=tag-key,Values=IsUsedForDeploy" --output text)
export PrivateSubnet1=$(aws ec2 describe-subnets --query 'Subnets[*].[SubnetId]' --filters "Name=tag-value,Values=VPC-Observability Private Subnet (AZ1)" --output text)
export PrivateSubnet2=$(aws ec2 describe-subnets --query 'Subnets[*].[SubnetId]' --filters "Name=tag-value,Values=VPC-Observability Private Subnet (AZ2)" --output text)
export PrivateSubnet3=$(aws ec2 describe-subnets --query 'Subnets[*].[SubnetId]' --filters "Name=tag-value,Values=VPC-Observability Private Subnet (AZ3)" --output text)
export PublicSubnet1=$(aws ec2 describe-subnets --query 'Subnets[*].[SubnetId]' --filters "Name=tag-value,Values=VPC-Observability Public Subnet (AZ1)" --output text)
export PublicSubnet2=$(aws ec2 describe-subnets --query 'Subnets[*].[SubnetId]' --filters "Name=tag-value,Values=VPC-Observability Public Subnet (AZ2)" --output text)
export PublicSubnet3=$(aws ec2 describe-subnets --query 'Subnets[*].[SubnetId]' --filters "Name=tag-value,Values=VPC-Observability Public Subnet (AZ3)" --output text)

echo "export MyVPC=${MyVPC}" | tee -a ~/.bash_profile
echo "export PrivateSubnet1=${PrivateSubnet1}" | tee -a ~/.bash_profile
echo "export PrivateSubnet2=${PrivateSubnet2}" | tee -a ~/.bash_profile
echo "export PrivateSubnet3=${PrivateSubnet3}" | tee -a ~/.bash_profile
echo "export PublicSubnet1=${PublicSubnet1}" | tee -a ~/.bash_profile
echo "export PublicSubnet2=${PublicSubnet2}" | tee -a ~/.bash_profile
echo "export PublicSubnet3=${PublicSubnet3}" | tee -a ~/.bash_profile

# Configure EKS Cluster kubeconfig
aws eks list-clusters --output text |
  awk '{print $2}' |
  while read line; do
    aws eks --region $AWS_REGION update-kubeconfig --name "$line"
  done

# Reload bash_profile
source ~/.bash_profile
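
Before moving on, it is worth a quick sanity check that the credentials, region, and kubeconfig were picked up correctly:

# Confirm the CLI identity and default region
aws sts get-caller-identity
aws configure get default.region

# Confirm kubectl can reach the EKS cluster registered above
kubectl get nodes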

After installing the tools above, clone the lab repository:

# Download lab repository
git clone https://github.com/aws-samples/observability-with-amazon-opensearch

Change into the repository and build a Docker image for each application:

cd observability-with-amazon-opensearch/

# Env Vars
export ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
export TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
export AWS_REGION=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s 169.254.169.254/latest/dynamic/instance-identity/document | jq -r '.region')

aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com


# Build one service's image, push it to ECR, and patch its Kubernetes manifest.
#   $1 - folder under sample-apps/ containing the Dockerfile and manifests
#   $2 - ECR repository / image name
push_images_ecr() {
    echo "Building ${2} ..."
    service_folder=$1
    repo_name=$2
    cd sample-apps/$service_folder/
    echo $PWD # Check Directory
    # Build the image and push it to the account's ECR registry
    docker build -t $repo_name .
    docker tag $repo_name:latest ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/$repo_name:latest
    docker push ${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/$repo_name:latest
    # Replace the account/region placeholders in the deployment manifest with real values
    sed -i -e "s/__ACCOUNT_ID__/${ACCOUNT_ID}/g" kubernetes/01-deployment.yaml
    sed -i -e "s/__AWS_REGION__/${AWS_REGION}/g" kubernetes/01-deployment.yaml
    # Clean up the stray backup file that BSD/macOS sed would leave behind for "-i -e"
    rm -rf kubernetes/01-deployment.yaml-e
    cd ../..
}

push_images_ecr '04-analytics-service' 'analytics-service'
push_images_ecr '05-databaseService' 'database-service'
push_images_ecr '06-orderService' 'order-service'
push_images_ecr '07-inventoryService' 'inventory-service'
push_images_ecr '08-paymentService' 'payment-service'
push_images_ecr '09-recommendationService' 'recommendation-service'
push_images_ecr '10-authenticationService' 'authentication-service'
push_images_ecr '11-client' 'client-service'
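
After the script finishes, it is worth confirming that a :latest image landed in each repository. The repository names below simply mirror the second argument passed to push_images_ecr; if a push failed because a repository does not exist (this depends on how the lab environment was provisioned), it can be created with aws ecr create-repository.

# List the image tags in each repository to confirm the pushes succeeded
for repo in analytics-service database-service order-service inventory-service \
            payment-service recommendation-service authentication-service client-service; do
  echo "--- $repo ---"
  aws ecr list-images --repository-name "$repo" --query 'imageIds[].imageTag' --output text
done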

Configuring OpenTelemetry Data Delivery to OpenSearch

In this lab we will collect log and trace data and send it to the OpenSearch cluster:

  • Log data is collected by FluentBit, which forwards it to Data Prepper and on to OpenSearch
  • Trace data is collected by AWS Distro for OpenTelemetry and is likewise sent to Data Prepper

Data Prepper lets us build custom data-ingestion pipelines, so we need to define its source and sink:

Open ~/environment/observability-with-amazon-opensearch/sample-apps/01-data-prepper/kubernetes/data-prepper.yaml and update the following three settings: the OpenSearch endpoint URL, the username, and the password.

The OpenSearch URL, username, and password can all be found in the outputs of the previous section.
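
The exact layout of data-prepper.yaml depends on the version shipped with the repository, but the values to change live in the opensearch sink sections of its pipelines. A rough, hypothetical excerpt of such a sink (the pipeline name, endpoint, username, and password below are placeholders, not the repository's actual contents):

# Hypothetical Data Prepper pipeline excerpt -- check the real file for the exact keys
raw-trace-pipeline:
  source:
    otel_trace_source:
      ssl: false
  sink:
    - opensearch:
        hosts: ["https://<your-opensearch-domain-endpoint>"]  # OpenSearch URL from the previous section's outputs
        username: "<master-username>"
        password: "<master-password>"
        index_type: trace-analytics-raw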

Deploying the Microservices

Run the following commands to apply each microservice's Kubernetes manifests:

# Apply all Kubernetes manifests for one component.
#   $1 - folder under sample-apps/ containing a kubernetes/ directory
apply_manifests() {
    service_folder=$1
    cd sample-apps/$service_folder/
    echo $PWD # Check Directory
    kubectl apply -f kubernetes/
    cd ../..
}

apply_manifests '00-fluentBit'
apply_manifests '01-data-prepper'
apply_manifests '02-otel-collector'
apply_manifests '03-mysql'
apply_manifests '04-analytics-service'
apply_manifests '05-databaseService'
apply_manifests '06-orderService'
apply_manifests '07-inventoryService'
apply_manifests '08-paymentService'
apply_manifests '09-recommendationService'
apply_manifests '10-authenticationService'
apply_manifests '11-client'

Watch the pods in all namespaces and confirm that they all reach the Running state:

watch -n 10 kubectl get pods --all-namespaces
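
Once the pods are Running, the logs of an individual service can be tailed to confirm it started cleanly. The deployment name below is an assumption; check the actual name with kubectl -n order-service get deploy.

# Tail the most recent log lines of one backend service
kubectl -n order-service logs deploy/order-service --tail=50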

The web UI entry point is exposed through a Service of type LoadBalancer; its address can be retrieved with:

kubectl get svc -n client-service | awk '{print $4}' | tail -n1
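
An equivalent lookup with jsonpath, assuming a single LoadBalancer Service in the client-service namespace:

kubectl -n client-service get svc \
  -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'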
