Kiro: Transforming DevOps with AI Agents

You are a DevOps specialist juggling continuous integration and deployment (CI/CD) pipelines, infrastructure provisioning, and developer productivity services in an Amazon Web Services (AWS) environment. You’ve probably wondered: “With all these generative artificial intelligence offerings, is there a service that can actually make my work more efficient?” The answer is yes. In this post, I introduce Kiro, an agentic AI development service, and show how it can save you hours on your daily automation tasks. Learn how the Kiro command line interface (CLI) transforms DevOps automation with AI-powered agents that install, configure, and deploy infrastructure using natural language commands in minutes. ...

December 16, 2025 · 10 min · 1993 words · Matheus Costa

Top Announcements of AWS re:Invent 2025

🚀 Introduction

AWS re:Invent 2025 took place in Las Vegas and brought a series of groundbreaking announcements that will change how we work with cloud computing. As an AWS specialist, I will share the main launches and their practical impact. This event is always a milestone for the tech community, and this year was no different. Let's explore the most important announcements and how they can benefit your projects.

📊 Analytics

AWS Clean Rooms - Privacy-Preserving Dataset Generation

Train ML models on sensitive collaborative data by generating synthetic datasets that preserve statistical patterns while protecting individual privacy through configurable noise levels and protection against re-identification. ...
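As a rough intuition for how configurable noise can hide individual records while keeping aggregates useful (a simplified sketch of the general technique, not Clean Rooms' actual mechanism), consider perturbing a sensitive numeric column with Laplace noise:

```python
import numpy as np

def add_laplace_noise(values, epsilon, sensitivity=1.0):
    """Perturb each value with Laplace noise; smaller epsilon means more noise
    and stronger privacy. epsilon and sensitivity are illustrative knobs."""
    scale = sensitivity / epsilon
    return np.asarray(values, dtype=float) + np.random.laplace(0.0, scale, len(values))

ages = [34, 29, 41, 52, 38]
noisy = add_laplace_noise(ages, epsilon=0.5)
print(noisy)                          # individual values are obscured
print(np.mean(ages), np.mean(noisy))  # aggregates converge as the sample grows
```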

December 16, 2025 · 8 min · 1587 words · Matheus Costa

Redis Helper: Complete Tool for Redis Management

Over the past few days, a specific demand for Redis configurations arose in my work environment. With it came the need for efficient monitoring, for validating that Redis instances were functional, and for an easy way to manage instances running on other hosts.

🚀 The Real Problem

As many DevOps professionals know, managing multiple Redis instances can be challenging:

- Scattered monitoring across different tools
- Lack of real-time visibility into performance
- Difficulty with automated backup/restore
- Absence of centralized security auditing
- Complexity in managing clusters

💡 The Solution: Redis Helper

I used my real need as a foundation and, with the help of Amazon Q, developed a complete tool that addresses all these pain points in an integrated way. ...
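To illustrate the kind of health check a tool like this automates, here is a minimal sketch using redis-py; the host inventory, ports, and timeout are placeholders of mine, not Redis Helper's actual code:

```python
import redis  # redis-py: pip install redis

# Hypothetical inventory; replace with your own hosts
INSTANCES = [
    {"host": "cache-01.internal", "port": 6379},
    {"host": "cache-02.internal", "port": 6380},
]

def check_instances(instances):
    """Ping each Redis instance and collect a basic memory metric."""
    report = []
    for spec in instances:
        name = f"{spec['host']}:{spec['port']}"
        client = redis.Redis(host=spec["host"], port=spec["port"], socket_timeout=2)
        try:
            client.ping()  # raises if the instance is unreachable
            info = client.info(section="memory")
            report.append({"instance": name, "alive": True,
                           "used_memory": info.get("used_memory_human")})
        except (redis.exceptions.ConnectionError, redis.exceptions.TimeoutError):
            report.append({"instance": name, "alive": False, "used_memory": None})
    return report

if __name__ == "__main__":
    for row in check_instances(INSTANCES):
        print(row)
```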

July 30, 2025 · 4 min · 652 words · Matheus Costa

AI and Automation on AWS: Implementing Intelligent Solutions

Introduction

AWS offers a complete ecosystem of AI services that allows you to implement intelligent solutions without the need for deep machine learning expertise. This guide explores how to use these services to automate processes and build smarter applications.

AWS AI Services

1. Amazon Comprehend - Text Analysis

Sentiment Analysis

```python
import boto3

def analyze_sentiment(text):
    """Analyze text sentiment using Comprehend"""
    comprehend = boto3.client('comprehend')
    response = comprehend.detect_sentiment(
        Text=text,
        LanguageCode='pt'
    )
    return {
        'sentiment': response['Sentiment'],
        'confidence': response['SentimentScore']
    }

# Usage example
text = "I am very satisfied with the company's service!"
result = analyze_sentiment(text)
print(f"Sentiment: {result['sentiment']}")
print(f"Confidence: {result['confidence']}")
```

Entity Extraction

```python
def extract_entities(text):
    """Extract named entities from text"""
    comprehend = boto3.client('comprehend')
    response = comprehend.detect_entities(
        Text=text,
        LanguageCode='pt'
    )
    entities = []
    for entity in response['Entities']:
        entities.append({
            'text': entity['Text'],
            'type': entity['Type'],
            'confidence': entity['Score']
        })
    return entities

# Example
text = "João Silva works at Amazon in São Paulo since 2020"
entities = extract_entities(text)
for entity in entities:
    print(f"{entity['text']} - {entity['type']} ({entity['confidence']:.2f})")
```

2. Amazon Rekognition - Image Analysis

Object Detection

```python
def detect_objects_in_image(bucket_name, image_key):
    """Detect objects in an S3 image"""
    rekognition = boto3.client('rekognition')
    response = rekognition.detect_labels(
        Image={'S3Object': {'Bucket': bucket_name, 'Name': image_key}},
        MaxLabels=10,
        MinConfidence=80
    )
    objects = []
    for label in response['Labels']:
        objects.append({
            'name': label['Name'],
            'confidence': label['Confidence'],
            'instances': len(label.get('Instances', []))
        })
    return objects
```

Facial Recognition

```python
def detect_faces(bucket_name, image_key):
    """Detect faces in an image"""
    rekognition = boto3.client('rekognition')
    response = rekognition.detect_faces(
        Image={'S3Object': {'Bucket': bucket_name, 'Name': image_key}},
        Attributes=['ALL']
    )
    faces = []
    for face in response['FaceDetails']:
        faces.append({
            'age_range': face['AgeRange'],
            'gender': face['Gender']['Value'],
            'emotions': [
                {'type': emotion['Type'], 'confidence': emotion['Confidence']}
                for emotion in face['Emotions']
                if emotion['Confidence'] > 50
            ]
        })
    return faces
```

3. Amazon Polly - Text-to-Speech

```python
def text_to_speech(text, output_bucket, output_key):
    """Convert text to audio using Polly"""
    polly = boto3.client('polly')
    s3 = boto3.client('s3')

    # Synthesize speech
    response = polly.synthesize_speech(
        Text=text,
        OutputFormat='mp3',
        VoiceId='Camila',  # Brazilian Portuguese voice
        LanguageCode='pt-BR'
    )

    # Save to S3
    s3.put_object(
        Bucket=output_bucket,
        Key=output_key,
        Body=response['AudioStream'].read(),
        ContentType='audio/mpeg'
    )
    return f"s3://{output_bucket}/{output_key}"

# Example
audio_url = text_to_speech(
    "Hello! This is an example of speech synthesis using Amazon Polly.",
    "my-audio-bucket",
    "speech/example.mp3"
)
```

Practical Use Cases

1. Automatic Customer Feedback Analysis

```python
import json
from datetime import datetime

import boto3

class FeedbackAnalyzer:
    def __init__(self):
        self.comprehend = boto3.client('comprehend')
        self.dynamodb = boto3.resource('dynamodb')
        self.sns = boto3.client('sns')
        self.table = self.dynamodb.Table('customer-feedback')

    def process_feedback(self, feedback_text, customer_id):
        """Process customer feedback"""
        # Sentiment analysis
        sentiment_response = self.comprehend.detect_sentiment(
            Text=feedback_text,
            LanguageCode='pt'
        )

        # Key topic extraction
        key_phrases_response = self.comprehend.detect_key_phrases(
            Text=feedback_text,
            LanguageCode='pt'
        )

        # Prepare data for storage
        feedback_data = {
            'feedback_id': f"{customer_id}_{int(datetime.now().timestamp())}",
            'customer_id': customer_id,
            'text': feedback_text,
            'sentiment': sentiment_response['Sentiment'],
            'sentiment_scores': sentiment_response['SentimentScore'],
            'key_phrases': [
                phrase['Text']
                for phrase in key_phrases_response['KeyPhrases']
                if phrase['Score'] > 0.8
            ],
            'timestamp': datetime.now().isoformat(),
            'processed': True
        }

        # Save to DynamoDB
        self.table.put_item(Item=feedback_data)

        # Alert if negative feedback
        if sentiment_response['Sentiment'] == 'NEGATIVE':
            self.send_alert(feedback_data)

        return feedback_data

    def send_alert(self, feedback_data):
        """Send alert for negative feedback"""
        message = {
            'alert_type': 'negative_feedback',
            'customer_id': feedback_data['customer_id'],
            'sentiment_score': feedback_data['sentiment_scores']['Negative'],
            'key_issues': feedback_data['key_phrases'][:3],
            'timestamp': feedback_data['timestamp']
        }
        self.sns.publish(
            TopicArn='arn:aws:sns:region:account:customer-alerts',
            Message=json.dumps(message),
            Subject='Negative Feedback Detected'
        )

# Class usage
analyzer = FeedbackAnalyzer()
result = analyzer.process_feedback(
    "The product arrived defective and the customer service was terrible!",
    "customer_123"
)
```

2. Automatic Content Moderation

```python
import boto3

class ContentModerator:
    def __init__(self):
        self.rekognition = boto3.client('rekognition')
        self.comprehend = boto3.client('comprehend')
        self.s3 = boto3.client('s3')

    def moderate_image(self, bucket_name, image_key):
        """Moderate image content"""
        # Detect inappropriate content
        moderation_response = self.rekognition.detect_moderation_labels(
            Image={'S3Object': {'Bucket': bucket_name, 'Name': image_key}},
            MinConfidence=60
        )
        inappropriate_content = []
        for label in moderation_response['ModerationLabels']:
            inappropriate_content.append({
                'category': label['Name'],
                'confidence': label['Confidence'],
                'parent_category': label.get('ParentName', '')
            })

        # Detect text in the image
        text_response = self.rekognition.detect_text(
            Image={'S3Object': {'Bucket': bucket_name, 'Name': image_key}}
        )
        detected_text = ' '.join([
            text['DetectedText']
            for text in text_response['TextDetections']
            if text['Type'] == 'LINE'
        ])

        # Analyze sentiment of detected text
        text_sentiment = None
        if detected_text:
            sentiment_response = self.comprehend.detect_sentiment(
                Text=detected_text,
                LanguageCode='pt'
            )
            text_sentiment = sentiment_response['Sentiment']

        return {
            'image_key': image_key,
            'inappropriate_content': inappropriate_content,
            'detected_text': detected_text,
            'text_sentiment': text_sentiment,
            'approved': len(inappropriate_content) == 0,
            'confidence_score': min(
                [label['confidence'] for label in inappropriate_content]
            ) if inappropriate_content else 100
        }

    def moderate_text(self, text_content):
        """Moderate text content"""
        # Detect toxic language using Comprehend
        sentiment_response = self.comprehend.detect_sentiment(
            Text=text_content,
            LanguageCode='pt'
        )

        # Prohibited words list (simplified example)
        prohibited_words = ['spam', 'scam', 'fraud']
        contains_prohibited = any(
            word.lower() in text_content.lower() for word in prohibited_words
        )

        return {
            'text': text_content,
            'sentiment': sentiment_response['Sentiment'],
            'sentiment_scores': sentiment_response['SentimentScore'],
            'contains_prohibited_words': contains_prohibited,
            'approved': (not contains_prohibited
                         and sentiment_response['Sentiment'] != 'NEGATIVE')
        }

# Usage example
moderator = ContentModerator()

# Moderate image
image_result = moderator.moderate_image('content-bucket', 'user-uploads/image.jpg')
print(f"Image approved: {image_result['approved']}")

# Moderate text
text_result = moderator.moderate_text("This is a normal comment about the product.")
print(f"Text approved: {text_result['approved']}")
```

3. Intelligent Chatbot with Lex

```python
from datetime import datetime

import boto3

class IntelligentChatbot:
    def __init__(self):
        self.lex = boto3.client('lexv2-runtime')
        self.comprehend = boto3.client('comprehend')
        self.dynamodb = boto3.resource('dynamodb')
        self.conversation_table = self.dynamodb.Table('chatbot-conversations')

    def process_message(self, user_id, message, session_id=None):
        """Process user message"""
        if not session_id:
            session_id = f"{user_id}_{int(datetime.now().timestamp())}"

        # Analyze intent with Lex
        lex_response = self.lex.recognize_text(
            botId='your-bot-id',
            botAliasId='your-bot-alias-id',
            localeId='pt_BR',
            sessionId=session_id,
            text=message
        )

        # Analyze message sentiment
        sentiment_response = self.comprehend.detect_sentiment(
            Text=message,
            LanguageCode='pt'
        )

        # Prepare response based on intent
        intent_name = lex_response.get('sessionState', {}) \
            .get('intent', {}).get('name', 'Unknown')
        bot_response = lex_response.get('messages', [{}])[0] \
            .get('content', 'Sorry, I did not understand.')

        # Customize response based on sentiment
        if sentiment_response['Sentiment'] == 'NEGATIVE':
            bot_response = (
                f"I can see you're frustrated. {bot_response} "
                "Would you like me to transfer you to a human agent?"
            )

        # Save conversation
        conversation_data = {
            'conversation_id': f"{session_id}_{int(datetime.now().timestamp())}",
            'user_id': user_id,
            'session_id': session_id,
            'user_message': message,
            'bot_response': bot_response,
            'intent': intent_name,
            'sentiment': sentiment_response['Sentiment'],
            'confidence': lex_response.get('sessionState', {})
                .get('intent', {}).get('confirmationState', 'None'),
            'timestamp': datetime.now().isoformat()
        }
        self.conversation_table.put_item(Item=conversation_data)

        return {
            'response': bot_response,
            'intent': intent_name,
            'sentiment': sentiment_response['Sentiment'],
            'session_id': session_id
        }

    def get_conversation_analytics(self, user_id):
        """Get conversation analytics"""
        response = self.conversation_table.query(
            IndexName='user-id-index',
            KeyConditionExpression='user_id = :user_id',
            ExpressionAttributeValues={':user_id': user_id}
        )
        conversations = response['Items']

        # Calculate metrics
        total_messages = len(conversations)
        sentiments = [conv['sentiment'] for conv in conversations]
        intents = [conv['intent'] for conv in conversations]

        return {
            'total_messages': total_messages,
            'sentiment_distribution': {
                'positive': sentiments.count('POSITIVE'),
                'negative': sentiments.count('NEGATIVE'),
                'neutral': sentiments.count('NEUTRAL')
            },
            'top_intents': list(set(intents)),
            'last_interaction': max(
                [conv['timestamp'] for conv in conversations]
            ) if conversations else None
        }

# Usage example
chatbot = IntelligentChatbot()
response = chatbot.process_message(
    user_id="user_123",
    message="I need to cancel my order",
    session_id="session_456"
)
print(f"Bot response: {response['response']}")
print(f"Detected intent: {response['intent']}")
```

Automation with Step Functions

Document Processing Workflow

```json
{
  "Comment": "Automatic document processing workflow",
  "StartAt": "ExtractText",
  "States": {
    "ExtractText": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:textract:startDocumentTextDetection",
      "Parameters": {
        "DocumentLocation": {
          "S3Object": {
            "Bucket.$": "$.bucket",
            "Name.$": "$.key"
          }
        }
      },
      "Next": "WaitForExtraction"
    },
    "WaitForExtraction": {
      "Type": "Wait",
      "Seconds": 10,
      "Next": "GetExtractionResults"
    },
    "GetExtractionResults": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:textract:getDocumentTextDetection",
      "Parameters": {
        "JobId.$": "$.JobId"
      },
      "Next": "AnalyzeText"
    },
    "AnalyzeText": {
      "Type": "Task",
      "Resource": "arn:aws:states:::lambda:invoke",
      "Parameters": {
        "FunctionName": "analyze-extracted-text",
        "Payload.$": "$"
      },
      "Next": "ClassifyDocument"
    },
    "ClassifyDocument": {
      "Type": "Task",
      "Resource": "arn:aws:states:::aws-sdk:comprehend:detectSentiment",
      "Parameters": {
        "Text.$": "$.extractedText",
        "LanguageCode": "pt"
      },
      "Next": "StoreResults"
    },
    "StoreResults": {
      "Type": "Task",
      "Resource": "arn:aws:states:::dynamodb:putItem",
      "Parameters": {
        "TableName": "processed-documents",
        "Item": {
          "documentId": {"S.$": "$.documentId"},
          "extractedText": {"S.$": "$.extractedText"},
          "sentiment": {"S.$": "$.Sentiment"},
          "processedAt": {"S.$": "$$.State.EnteredTime"}
        }
      },
      "End": true
    }
  }
}
```

Monitoring and Optimization

CloudWatch Metrics for AI Services

```python
import json

import boto3

def monitor_ai_services():
    """Monitor usage and performance of AI services"""
    cloudwatch = boto3.client('cloudwatch')

    # Custom metrics
    metrics = [
        {
            'MetricName': 'ComprehendRequests',
            'Value': 1,
            'Unit': 'Count',
            'Dimensions': [{'Name': 'Service', 'Value': 'Comprehend'}]
        },
        {
            'MetricName': 'RekognitionRequests',
            'Value': 1,
            'Unit': 'Count',
            'Dimensions': [{'Name': 'Service', 'Value': 'Rekognition'}]
        }
    ]
    cloudwatch.put_metric_data(
        Namespace='AI/Services',
        MetricData=metrics
    )

    # Dashboard for monitoring
    dashboard_config = {
        "widgets": [
            {
                "type": "metric",
                "properties": {
                    "metrics": [
                        ["AI/Services", "ComprehendRequests"],
                        ["AI/Services", "RekognitionRequests"]
                    ],
                    "period": 300,
                    "stat": "Sum",
                    "region": "us-east-1",
                    "title": "AI Services Usage"
                }
            }
        ]
    }
    cloudwatch.put_dashboard(
        DashboardName='ai-services',
        DashboardBody=json.dumps(dashboard_config)
    )
```

Conclusion

AWS AI services democratize access to artificial intelligence, enabling developers to implement sophisticated solutions without deep ML expertise. The main advantages include: ...

July 16, 2025 · 7 min · 1282 words · Matheus Costa

Infrastructure as Code with Terraform on AWS: Best Practices and Automation

Introduction

Infrastructure as Code (IaC) has revolutionized the way we manage cloud infrastructure. Terraform, combined with AWS, offers a powerful solution for creating, modifying, and versioning infrastructure in a declarative and reproducible way.

Why Terraform + AWS?

Terraform Advantages

✅ Multi-cloud - Support for multiple providers
✅ Declarative - Describes the desired state
✅ Planning - Preview changes before applying
✅ State Management - Centralized state control
✅ Modularity - Code reuse

Benefits on AWS

🚀 Scalability - Infrastructure that grows with demand
🔒 Security - Integrated security controls
💰 Cost-effective - Automatic resource optimization
🔄 Automation - Automated deployment and management

Terraform Project Structure

Directory Organization

```
terraform-aws-infrastructure/
├── environments/
│   ├── dev/
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   ├── outputs.tf
│   │   └── terraform.tfvars
│   ├── staging/
│   └── production/
├── modules/
│   ├── vpc/
│   ├── ec2/
│   ├── rds/
│   ├── s3/
│   └── security-groups/
├── shared/
│   ├── backend.tf
│   └── providers.tf
└── scripts/
    ├── deploy.sh
    └── destroy.sh
```

Base Configuration

Provider Configuration

```hcl
# providers.tf
terraform {
  required_version = ">= 1.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.1"
    }
  }

  backend "s3" {
    bucket         = "terraform-state-bucket"
    key            = "infrastructure/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks"
  }
}

provider "aws" {
  region = var.aws_region

  default_tags {
    tags = {
      Environment = var.environment
      Project     = var.project_name
      ManagedBy   = "Terraform"
      Owner       = var.owner
      CostCenter  = var.cost_center
    }
  }
}
```

Variables Configuration

```hcl
# variables.tf
variable "aws_region" {
  description = "AWS region for resources"
  type        = string
  default     = "us-east-1"
}

variable "environment" {
  description = "Environment name"
  type        = string

  validation {
    condition     = contains(["dev", "staging", "production"], var.environment)
    error_message = "Environment must be dev, staging, or production."
  }
}

variable "project_name" {
  description = "Name of the project"
  type        = string
}

variable "vpc_cidr" {
  description = "CIDR block for VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "availability_zones" {
  description = "List of availability zones"
  type        = list(string)
  default     = ["us-east-1a", "us-east-1b", "us-east-1c"]
}
```

Reusable Terraform Modules

VPC Module

```hcl
# modules/vpc/main.tf
resource "aws_vpc" "main" {
  cidr_block           = var.cidr_block
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = { Name = "${var.name}-vpc" }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = { Name = "${var.name}-igw" }
}

resource "aws_subnet" "public" {
  count                   = length(var.public_subnets)
  vpc_id                  = aws_vpc.main.id
  cidr_block              = var.public_subnets[count.index]
  availability_zone       = var.availability_zones[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.name}-public-${count.index + 1}"
    Type = "Public"
  }
}

resource "aws_subnet" "private" {
  count             = length(var.private_subnets)
  vpc_id            = aws_vpc.main.id
  cidr_block        = var.private_subnets[count.index]
  availability_zone = var.availability_zones[count.index]

  tags = {
    Name = "${var.name}-private-${count.index + 1}"
    Type = "Private"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = { Name = "${var.name}-public-rt" }
}

resource "aws_route_table_association" "public" {
  count          = length(aws_subnet.public)
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

# NAT Gateway for private subnets
resource "aws_eip" "nat" {
  count  = var.enable_nat_gateway ? length(var.public_subnets) : 0
  domain = "vpc"

  tags = { Name = "${var.name}-nat-eip-${count.index + 1}" }

  depends_on = [aws_internet_gateway.main]
}

resource "aws_nat_gateway" "main" {
  count         = var.enable_nat_gateway ? length(var.public_subnets) : 0
  allocation_id = aws_eip.nat[count.index].id
  subnet_id     = aws_subnet.public[count.index].id

  tags = { Name = "${var.name}-nat-${count.index + 1}" }
}

resource "aws_route_table" "private" {
  count  = var.enable_nat_gateway ? length(var.private_subnets) : 0
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.main[count.index].id
  }

  tags = { Name = "${var.name}-private-rt-${count.index + 1}" }
}

resource "aws_route_table_association" "private" {
  count          = var.enable_nat_gateway ? length(aws_subnet.private) : 0
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private[count.index].id
}
```

Security Groups Module

```hcl
# modules/security-groups/main.tf
resource "aws_security_group" "web" {
  name_prefix = "${var.name}-web-"
  vpc_id      = var.vpc_id
  description = "Security group for web servers"

  ingress {
    description = "HTTP"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "HTTPS"
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = { Name = "${var.name}-web-sg" }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_security_group" "database" {
  name_prefix = "${var.name}-db-"
  vpc_id      = var.vpc_id
  description = "Security group for database servers"

  ingress {
    description     = "MySQL/Aurora"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.web.id]
  }

  ingress {
    description     = "PostgreSQL"
    from_port       = 5432
    to_port         = 5432
    protocol        = "tcp"
    security_groups = [aws_security_group.web.id]
  }

  tags = { Name = "${var.name}-db-sg" }

  lifecycle {
    create_before_destroy = true
  }
}
```

EC2 Module with Auto Scaling

```hcl
# modules/ec2/main.tf
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

resource "aws_launch_template" "web" {
  name_prefix            = "${var.name}-web-"
  image_id               = data.aws_ami.amazon_linux.id
  instance_type          = var.instance_type
  vpc_security_group_ids = var.security_group_ids

  user_data = base64encode(templatefile("${path.module}/user_data.sh", {
    environment = var.environment
  }))

  iam_instance_profile {
    name = aws_iam_instance_profile.web.name
  }

  block_device_mappings {
    device_name = "/dev/xvda"

    ebs {
      volume_size = var.root_volume_size
      volume_type = "gp3"
      encrypted   = true
    }
  }

  tag_specifications {
    resource_type = "instance"
    tags          = { Name = "${var.name}-web-instance" }
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_autoscaling_group" "web" {
  name                      = "${var.name}-web-asg"
  vpc_zone_identifier       = var.subnet_ids
  target_group_arns         = var.target_group_arns
  health_check_type         = "ELB"
  health_check_grace_period = 300

  min_size         = var.min_size
  max_size         = var.max_size
  desired_capacity = var.desired_capacity

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  tag {
    key                 = "Name"
    value               = "${var.name}-web-asg"
    propagate_at_launch = false
  }

  instance_refresh {
    strategy = "Rolling"

    preferences {
      min_healthy_percentage = 50
    }
  }
}

# IAM Role for EC2 instances
resource "aws_iam_role" "web" {
  name = "${var.name}-web-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "ec2.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_instance_profile" "web" {
  name = "${var.name}-web-profile"
  role = aws_iam_role.web.name
}

resource "aws_iam_role_policy_attachment" "web_ssm" {
  role       = aws_iam_role.web.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
```

Environment Implementation

Development Environment

```hcl
# environments/dev/main.tf
module "vpc" {
  source = "../../modules/vpc"

  name               = "${var.project_name}-${var.environment}"
  cidr_block         = var.vpc_cidr
  availability_zones = var.availability_zones
  public_subnets     = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets    = ["10.0.10.0/24", "10.0.20.0/24"]
  enable_nat_gateway = false # Cost savings in dev
}

module "security_groups" {
  source = "../../modules/security-groups"

  name   = "${var.project_name}-${var.environment}"
  vpc_id = module.vpc.vpc_id
}

module "web_servers" {
  source = "../../modules/ec2"

  name               = "${var.project_name}-${var.environment}"
  environment        = var.environment
  instance_type      = "t3.micro"
  min_size           = 1
  max_size           = 2
  desired_capacity   = 1
  subnet_ids         = module.vpc.public_subnet_ids
  security_group_ids = [module.security_groups.web_sg_id]
}
```

Production Environment

```hcl
# environments/production/main.tf
module "vpc" {
  source = "../../modules/vpc"

  name               = "${var.project_name}-${var.environment}"
  cidr_block         = var.vpc_cidr
  availability_zones = var.availability_zones
  public_subnets     = ["10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]
  private_subnets    = ["10.1.10.0/24", "10.1.20.0/24", "10.1.30.0/24"]
  enable_nat_gateway = true
}

module "security_groups" {
  source = "../../modules/security-groups"

  name   = "${var.project_name}-${var.environment}"
  vpc_id = module.vpc.vpc_id
}

module "web_servers" {
  source = "../../modules/ec2"

  name               = "${var.project_name}-${var.environment}"
  environment        = var.environment
  instance_type      = "t3.medium"
  min_size           = 2
  max_size           = 10
  desired_capacity   = 3
  subnet_ids         = module.vpc.private_subnet_ids
  security_group_ids = [module.security_groups.web_sg_id]
}

module "database" {
  source = "../../modules/rds"

  name               = "${var.project_name}-${var.environment}"
  engine             = "mysql"
  engine_version     = "8.0"
  instance_class     = "db.t3.medium"
  allocated_storage  = 100
  subnet_ids         = module.vpc.private_subnet_ids
  security_group_ids = [module.security_groups.database_sg_id]
  backup_retention   = 7
  multi_az           = true
}
```

Automation and CI/CD

GitLab CI Pipeline

```yaml
# .gitlab-ci.yml
stages:
  - validate
  - plan
  - apply
  - destroy

variables:
  TF_ROOT: ${CI_PROJECT_DIR}
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_ENVIRONMENT_NAME}

cache:
  key: "${TF_ROOT}"
  paths:
    - ${TF_ROOT}/.terraform

before_script:
  - cd ${TF_ROOT}/environments/${CI_ENVIRONMENT_NAME}
  - terraform --version
  - >
    terraform init
    -backend-config="address=${TF_ADDRESS}"
    -backend-config="lock_address=${TF_ADDRESS}/lock"
    -backend-config="unlock_address=${TF_ADDRESS}/lock"
    -backend-config="username=${CI_USERNAME}"
    -backend-config="password=${CI_JOB_TOKEN}"
    -backend-config="lock_method=POST"
    -backend-config="unlock_method=DELETE"
    -backend-config="retry_wait_min=5"

validate:
  stage: validate
  script:
    - terraform validate
    - terraform fmt -check
  only:
    - merge_requests
    - main

plan:
  stage: plan
  script:
    - terraform plan -out="planfile"
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/environments/${CI_ENVIRONMENT_NAME}/planfile
  only:
    - merge_requests
    - main

apply:
  stage: apply
  script:
    - terraform apply -input=false "planfile"
  dependencies:
    - plan
  when: manual
  only:
    - main
  environment:
    name: ${CI_ENVIRONMENT_NAME}

destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual
  only:
    - main
  environment:
    name: ${CI_ENVIRONMENT_NAME}
    action: stop
```

Automation Scripts

```bash
#!/bin/bash
# scripts/deploy.sh
set -e

ENVIRONMENT=${1:-dev}
ACTION=${2:-plan}

echo "🚀 Deploying to $ENVIRONMENT environment"

cd "environments/$ENVIRONMENT"

# Initialize Terraform
terraform init

# Validate configuration
terraform validate

# Format code
terraform fmt -recursive

case $ACTION in
  "plan")
    echo "📋 Planning infrastructure changes..."
    terraform plan -var-file="terraform.tfvars"
    ;;
  "apply")
    echo "🔨 Applying infrastructure changes..."
    terraform plan -var-file="terraform.tfvars" -out=tfplan
    terraform apply tfplan
    rm tfplan
    ;;
  "destroy")
    echo "💥 Destroying infrastructure..."
    terraform plan -destroy -var-file="terraform.tfvars" -out=tfplan
    terraform apply tfplan
    rm tfplan
    ;;
  *)
    echo "❌ Invalid action. Use: plan, apply, or destroy"
    exit 1
    ;;
esac

echo "✅ Operation completed successfully!"
```

Monitoring and Observability

CloudWatch Integration

```hcl
# modules/monitoring/main.tf
resource "aws_cloudwatch_dashboard" "main" {
  dashboard_name = "${var.name}-infrastructure"

  dashboard_body = jsonencode({
    widgets = [
      {
        type   = "metric"
        x      = 0
        y      = 0
        width  = 12
        height = 6
        properties = {
          metrics = [
            ["AWS/EC2", "CPUUtilization", "AutoScalingGroupName", var.asg_name],
            ["AWS/ApplicationELB", "TargetResponseTime", "LoadBalancer", var.alb_name]
          ]
          period = 300
          stat   = "Average"
          region = var.aws_region
          title  = "Infrastructure Metrics"
        }
      }
    ]
  })
}

resource "aws_cloudwatch_metric_alarm" "high_cpu" {
  alarm_name          = "${var.name}-high-cpu"
  comparison_operator = "GreaterThanThreshold"
  evaluation_periods  = "2"
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = "300"
  statistic           = "Average"
  threshold           = "80"
  alarm_description   = "This metric monitors EC2 CPU utilization"

  dimensions = {
    AutoScalingGroupName = var.asg_name
  }

  alarm_actions = [aws_sns_topic.alerts.arn]
}

resource "aws_sns_topic" "alerts" {
  name = "${var.name}-infrastructure-alerts"
}
```

Security and Compliance

Terraform Security Scanning

```yaml
# .github/workflows/security-scan.yml
name: Security Scan

on:
  pull_request:
    branches: [main]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Run Checkov
        uses: bridgecrewio/checkov-action@master
        with:
          directory: .
          framework: terraform
          output_format: sarif
          output_file_path: reports/results.sarif

      - name: Run TFSec
        uses: aquasecurity/tfsec-action@v1.0.0
        with:
          soft_fail: true

      - name: Run Terrascan
        uses: accurics/terrascan-action@main
        with:
          iac_type: terraform
          iac_version: v14
          policy_type: aws
```

State File Security

```hcl
# Backend configuration with encryption
terraform {
  backend "s3" {
    bucket         = "terraform-state-secure-bucket"
    key            = "infrastructure/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    kms_key_id     = "arn:aws:kms:us-east-1:account:key/key-id"
    dynamodb_table = "terraform-locks"
  }
}

# Versioning and server-side encryption are configured on the state bucket
# itself, not inside the backend block:
resource "aws_s3_bucket_versioning" "state" {
  bucket = "terraform-state-secure-bucket"

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = "terraform-state-secure-bucket"

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = "arn:aws:kms:us-east-1:account:key/key-id"
    }
  }
}
```

Best Practices

1. Code Organization

✅ Use reusable modules
✅ Separate environments into directories
✅ Keep files small and focused
✅ Use consistent naming conventions
✅ Document modules and variables

2. State Management

✅ Use a remote backend (S3 + DynamoDB)
✅ Enable state versioning
✅ Configure locks to avoid conflicts
✅ Encrypt state files
✅ Regularly back up the state

3. Security

✅ Use the least privilege principle
✅ Encrypt data in transit and at rest
✅ Implement resource tagging
✅ Use secrets management
✅ Perform regular security scanning

4. Performance and Costs

✅ Use data sources for existing resources
✅ Implement lifecycle rules
✅ Monitor costs with tags
✅ Use spot instances when appropriate
✅ Optimize storage classes

Common Troubleshooting

1. State Lock Issues

```bash
# Force unlock (use with caution)
terraform force-unlock LOCK_ID

# Check current state
terraform show

# Import existing resources
terraform import aws_instance.example i-1234567890abcdef0
```

2. Dependency Issues

```hcl
# Explicit dependencies
resource "aws_instance" "web" {
  # ... configuration ...

  depends_on = [
    aws_security_group.web,
    aws_subnet.public
  ]
}
```

3. Provider Version Conflicts

```hcl
# Lock provider versions
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "= 5.0.1" # Exact version
    }
  }
}
```

Conclusion

Terraform offers a robust platform for implementing Infrastructure as Code on AWS. By following the best practices presented, you can: ...

July 16, 2025 · 9 min · 1891 words · Matheus Costa

Advanced Amazon S3 Security: Preventing Data Leaks

Introduction

Amazon S3 is one of the most widely used AWS services, storing trillions of objects globally. With this popularity comes the responsibility of implementing robust security to protect sensitive data against leaks and unauthorized access.

Main Threats to S3

1. Insecure Configurations

- Unintentionally public buckets
- Permissive access policies
- Lack of encryption
- Disabled access logs

2. Common Attacks

- Data Exfiltration - unauthorized extraction of data
- Privilege Escalation - attackers obtaining broader permissions
- Insider Threats - malicious or negligent internal actors
- Credential Compromise - stolen or leaked access keys and passwords

Layered Security Architecture

Layer 1: Access Control

Granular IAM Policies

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictToSpecificBucket",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::secure-data-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        },
        "StringLike": {
          "s3:x-amz-server-side-encryption-context:project": "sensitive-project"
        }
      }
    },
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::secure-data-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": "aws:kms"
        }
      }
    }
  ]
}
```

Bucket Policies with Restrictive Conditions

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RestrictToVPCEndpoint",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::secure-data-bucket",
        "arn:aws:s3:::secure-data-bucket/*"
      ],
      "Condition": {
        "StringNotEquals": {
          "aws:sourceVpce": "vpce-1234567890abcdef0"
        }
      }
    },
    {
      "Sid": "RequireSSLRequestsOnly",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::secure-data-bucket",
        "arn:aws:s3:::secure-data-bucket/*"
      ],
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "false"
        }
      }
    }
  ]
}
```

Layer 2: Encryption

Server-Side Encryption with KMS

```bash
# Create a dedicated KMS key
aws kms create-key \
  --description "S3 encryption key for sensitive data" \
  --key-usage ENCRYPT_DECRYPT \
  --key-spec SYMMETRIC_DEFAULT

# Configure default encryption on the bucket
aws s3api put-bucket-encryption \
  --bucket secure-data-bucket \
  --server-side-encryption-configuration '{
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms",
          "KMSMasterKeyID": "arn:aws:kms:region:account:key/key-id"
        },
        "BucketKeyEnabled": true
      }
    ]
  }'
```

Requesting KMS Encryption on Upload

```python
import boto3
from botocore.client import Config

# Configure the S3 client
s3_client = boto3.client(
    's3',
    config=Config(
        signature_version='s3v4',
        s3={'addressing_style': 'virtual'}
    )
)

def upload_encrypted_object(bucket, key, data, kms_key_id):
    """Upload object, requesting SSE-KMS encryption per request"""
    response = s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ServerSideEncryption='aws:kms',
        SSEKMSKeyId=kms_key_id,
        Metadata={
            'classification': 'confidential',
            'encrypted': 'true'
        }
    )
    return response

# Usage example
upload_encrypted_object(
    bucket='secure-data-bucket',
    key='sensitive/document.pdf',
    data=open('document.pdf', 'rb'),
    kms_key_id='arn:aws:kms:region:account:key/key-id'
)
```

Layer 3: Monitoring and Auditing

CloudTrail for S3 Data Events

```json
{
  "Trail": {
    "Name": "S3DataEventsTrail",
    "S3BucketName": "audit-logs-bucket",
    "EventSelectors": [
      {
        "ReadWriteType": "All",
        "IncludeManagementEvents": false,
        "DataResources": [
          {
            "Type": "AWS::S3::Object",
            "Values": [
              "arn:aws:s3:::secure-data-bucket/*"
            ]
          }
        ]
      }
    ]
  }
}
```

S3 Access Logging

```bash
# Enable access logging
aws s3api put-bucket-logging \
  --bucket secure-data-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "access-logs-bucket",
      "TargetPrefix": "secure-data-bucket-logs/"
    }
  }'
```

Implementing Advanced Controls

1. S3 Object Lock

Legal Hold Configuration

```bash
# Enable Object Lock on the bucket
aws s3api create-bucket \
  --bucket immutable-data-bucket \
  --object-lock-enabled-for-bucket

# Configure default retention
aws s3api put-object-lock-configuration \
  --bucket immutable-data-bucket \
  --object-lock-configuration '{
    "ObjectLockEnabled": "Enabled",
    "Rule": {
      "DefaultRetention": {
        "Mode": "GOVERNANCE",
        "Years": 7
      }
    }
  }'
```

Upload with Specific Retention

```python
from datetime import datetime, timedelta

def upload_with_retention(bucket, key, data, retention_days):
    """Upload object with specific retention"""
    retention_date = datetime.utcnow() + timedelta(days=retention_days)

    response = s3_client.put_object(
        Bucket=bucket,
        Key=key,
        Body=data,
        ObjectLockMode='GOVERNANCE',
        ObjectLockRetainUntilDate=retention_date,
        Metadata={
            'retention-period': str(retention_days),
            'legal-hold': 'active'
        }
    )
    return response
```

2. S3 Intelligent-Tiering

Automatic Storage Class Configuration

```json
{
  "Id": "IntelligentTieringConfig",
  "Status": "Enabled",
  "Filter": {
    "Prefix": "sensitive-data/"
  },
  "Tierings": [
    {
      "Days": 90,
      "AccessTier": "ARCHIVE_ACCESS"
    },
    {
      "Days": 180,
      "AccessTier": "DEEP_ARCHIVE_ACCESS"
    }
  ]
}
```

3. Cross-Region Replication for DR

Secure Replication Configuration

```json
{
  "Role": "arn:aws:iam::account:role/replication-role",
  "Rules": [
    {
      "ID": "SecureReplication",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "critical-data/"
      },
      "Destination": {
        "Bucket": "arn:aws:s3:::backup-bucket-dr",
        "StorageClass": "STANDARD_IA",
        "EncryptionConfiguration": {
          "ReplicaKmsKeyID": "arn:aws:kms:region:account:key/backup-key-id"
        }
      }
    }
  ]
}
```

Anomaly Detection

1. Custom CloudWatch Metrics

```python
import boto3
from datetime import datetime, timedelta

def analyze_s3_access_patterns():
    """Analyze suspicious access patterns"""
    cloudwatch = boto3.client('cloudwatch')

    # Metrics for the last 24 hours, bucketed hourly
    end_time = datetime.utcnow()
    start_time = end_time - timedelta(hours=24)

    # Fetch request metrics
    response = cloudwatch.get_metric_statistics(
        Namespace='AWS/S3',
        MetricName='NumberOfObjects',
        Dimensions=[
            {'Name': 'BucketName', 'Value': 'secure-data-bucket'}
        ],
        StartTime=start_time,
        EndTime=end_time,
        Period=3600,
        Statistics=['Sum']
    )

    # Detect anomalous spikes
    values = [point['Sum'] for point in response['Datapoints']]
    if not values:
        return  # no datapoints yet; nothing to compare against
    avg = sum(values) / len(values)

    for point in response['Datapoints']:
        if point['Sum'] > avg * 3:  # 3x above average
            send_alert(
                f"Anomalous S3 access detected: {point['Sum']} requests "
                f"at {point['Timestamp']}"
            )

def send_alert(message):
    """Send alert via SNS"""
    sns = boto3.client('sns')
    sns.publish(
        TopicArn='arn:aws:sns:region:account:security-alerts',
        Message=message,
        Subject='S3 Security Alert'
    )
```

2. GuardDuty for S3

S3 Protection Configuration

```bash
# Enable S3 protection on an existing GuardDuty detector
aws guardduty update-detector \
  --detector-id detector-id \
  --data-sources '{"S3Logs": {"Enable": true}}'
```

Automated Response to Findings

```python
import json

import boto3

def handle_guardduty_s3_finding(event, context):
    """Automatically respond to GuardDuty findings"""
    finding = json.loads(event['Records'][0]['Sns']['Message'])

    if 'S3' in finding['type']:
        bucket_name = finding['service']['resourceRole']['bucketName']

        # Actions based on finding type
        if 'Exfiltration' in finding['type']:
            # Block public access immediately
            block_public_access(bucket_name)
        elif 'Persistence' in finding['type']:
            # Review bucket policies (implemented elsewhere in the post)
            audit_bucket_policies(bucket_name)

        # Notify security team (implemented elsewhere in the post)
        notify_security_team(finding)

def block_public_access(bucket_name):
    """Block public access to the bucket"""
    s3 = boto3.client('s3')
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            'BlockPublicAcls': True,
            'IgnorePublicAcls': True,
            'BlockPublicPolicy': True,
            'RestrictPublicBuckets': True
        }
    )
```

Compliance and Governance

1. AWS Config Rules

Rule for Mandatory Encryption

```json
{
  "ConfigRuleName": "s3-bucket-server-side-encryption-enabled",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
  },
  "Scope": {
    "ComplianceResourceTypes": [
      "AWS::S3::Bucket"
    ]
  }
}
```

Rule for Public Access Blocking

```json
{
  "ConfigRuleName": "s3-bucket-public-access-prohibited",
  "Source": {
    "Owner": "AWS",
    "SourceIdentifier": "S3_BUCKET_PUBLIC_ACCESS_PROHIBITED"
  },
  "Scope": {
    "ComplianceResourceTypes": [
      "AWS::S3::Bucket"
    ]
  }
}
```

2. Remediation Automation

```python
import boto3
import botocore.exceptions

def auto_remediate_s3_compliance(event, context):
    """Automatically remediate compliance issues"""
    config_item = event['configurationItem']
    bucket_name = config_item['resourceName']

    if config_item['resourceType'] == 'AWS::S3::Bucket':
        # Check if bucket is public
        if is_bucket_public(bucket_name):
            block_public_access(bucket_name)

        # Check encryption (helper implemented elsewhere in the post)
        if not is_bucket_encrypted(bucket_name):
            enable_bucket_encryption(bucket_name)

        # Check logging (helper implemented elsewhere in the post)
        if not is_logging_enabled(bucket_name):
            enable_access_logging(bucket_name)

def is_bucket_public(bucket_name):
    """Check if bucket has public access"""
    s3 = boto3.client('s3')
    try:
        response = s3.get_public_access_block(Bucket=bucket_name)
        config = response['PublicAccessBlockConfiguration']
        return not all([
            config.get('BlockPublicAcls', False),
            config.get('IgnorePublicAcls', False),
            config.get('BlockPublicPolicy', False),
            config.get('RestrictPublicBuckets', False)
        ])
    except botocore.exceptions.ClientError:
        return True  # Assume public if unable to verify
```

Implementation Best Practices

1. Security Principles

Defense in Depth

```yaml
# Example CloudFormation stack with multiple layers
Resources:
  SecureBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${AWS::StackName}-secure-data"
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: aws:kms
              KMSMasterKeyID: !Ref S3KMSKey
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      LoggingConfiguration:
        DestinationBucketName: !Ref AccessLogsBucket
        LogFilePrefix: access-logs/
      NotificationConfiguration:
        EventBridgeConfiguration:
          EventBridgeEnabled: true
```

2. Continuous Monitoring

S3 Security Dashboard

```json
{
  "widgets": [
    {
      "type": "metric",
      "properties": {
        "metrics": [
          ["AWS/S3", "BucketRequests", "BucketName", "secure-data-bucket", "FilterId", "EntireBucket"],
          ["AWS/S3", "AllRequests", "BucketName", "secure-data-bucket", "FilterId", "EntireBucket"]
        ],
        "period": 300,
        "stat": "Sum",
        "region": "us-east-1",
        "title": "S3 Request Volume"
      }
    },
    {
      "type": "log",
      "properties": {
        "query": "SOURCE '/aws/s3/access-logs' | fields @timestamp, remote_ip, request_uri, http_status\n| filter http_status >= 400\n| stats count() by remote_ip\n| sort count desc\n| limit 10",
        "region": "us-east-1",
        "title": "Top Error Sources"
      }
    }
  ]
}
```

Costs and Optimization

Cost-Benefit Analysis

| Security Control | Monthly Cost | Benefit | ROI |
|---|---|---|---|
| KMS Encryption | $1-10 | High | 1000%+ |
| CloudTrail Data Events | $10-50 | Medium | 500% |
| GuardDuty S3 Protection | $5-25 | High | 800% |
| Config Rules | $2-10 | Medium | 300% |
| Cross-Region Replication | $20-100 | High | 400% |

Cost Optimization

```python
def optimize_s3_security_costs():
    """Optimize S3 security costs"""
    # 1. Use Intelligent-Tiering for less-accessed data
    # 2. Configure lifecycle policies
    # 3. Compress data before upload
    # 4. Use S3 Transfer Acceleration only when needed
    # 5. Monitor KMS key usage

    lifecycle_config = {
        'Rules': [
            {
                'ID': 'SecurityOptimization',
                'Status': 'Enabled',
                'Filter': {'Prefix': 'logs/'},
                'Transitions': [
                    {'Days': 30, 'StorageClass': 'STANDARD_IA'},
                    {'Days': 90, 'StorageClass': 'GLACIER'}
                ]
            }
        ]
    }
    return lifecycle_config
```

Conclusion

Amazon S3 security requires a holistic approach that combines: ...

July 16, 2025 · 6 min · 1227 words · Matheus Costa

Advanced Ransomware Protection on AWS: Strategies and Implementation

Introduction

Ransomware attacks represent one of the biggest threats to corporate security today. On AWS, implementing a robust protection strategy is essential to maintain business continuity and protect critical data.

Understanding the Threat

What is Ransomware?

Ransomware is a type of malware that:

- Encrypts data and systems
- Demands payment for decryption
- Paralyzes business operations
- Causes significant financial losses

Common Attack Vectors

- Phishing and social engineering
- Application vulnerabilities
- Compromised credentials
- Inadequate privileged access
- Insecure configurations

Protection Strategies on AWS

1. Backup and Recovery

AWS Backup

```json
{
  "BackupPlan": {
    "BackupPlanName": "RansomwareProtection",
    "Rules": [
      {
        "RuleName": "DailyBackups",
        "TargetBackupVault": "SecureVault",
        "ScheduleExpression": "cron(0 2 ? * * *)",
        "Lifecycle": {
          "DeleteAfterDays": 90,
          "MoveToColdStorageAfterDays": 30
        }
      }
    ]
  }
}
```

Backup Vault Configuration

```bash
# Create backup vault with encryption
aws backup create-backup-vault \
  --backup-vault-name SecureVault \
  --encryption-key-arn arn:aws:kms:region:account:key/key-id \
  --backup-vault-tags Purpose=RansomwareProtection
```

2. Access Control (IAM)

Principle of Least Privilege

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::secure-bucket/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-server-side-encryption": "AES256"
        }
      }
    }
  ]
}
```

MFA for Critical Operations

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
```

3. Monitoring and Detection

CloudTrail for Auditing

```json
{
  "Trail": {
    "Name": "SecurityAuditTrail",
    "S3BucketName": "security-logs-bucket",
    "IncludeGlobalServiceEvents": true,
    "IsMultiRegionTrail": true,
    "EnableLogFileValidation": true,
    "EventSelectors": [
      {
        "ReadWriteType": "All",
        "IncludeManagementEvents": true,
        "DataResources": [
          {
            "Type": "AWS::S3::Object",
            "Values": ["arn:aws:s3:::critical-data/*"]
          }
        ]
      }
    ]
  }
}
```

GuardDuty for Threat Detection

```bash
# Enable GuardDuty
aws guardduty create-detector \
  --enable \
  --finding-publishing-frequency FIFTEEN_MINUTES
```

4. Network Segmentation

VPC with Isolation

```yaml
VPC:
  Type: AWS::EC2::VPC
  Properties:
    CidrBlock: 10.0.0.0/16
    EnableDnsHostnames: true
    EnableDnsSupport: true
    Tags:
      - Key: Name
        Value: SecureVPC

PrivateSubnet:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref VPC
    CidrBlock: 10.0.1.0/24
    AvailabilityZone: !Select [0, !GetAZs '']
    Tags:
      - Key: Name
        Value: PrivateSubnet
```

Restrictive Security Groups

```json
{
  "GroupDescription": "Secure access only",
  "SecurityGroupRules": [
    {
      "IpProtocol": "tcp",
      "FromPort": 443,
      "ToPort": 443,
      "CidrIp": "10.0.0.0/16"
    }
  ]
}
```

Implementing Specific Controls

1. S3 Bucket Protection

Versioning and MFA Delete

```bash
# Enable versioning with MFA Delete
aws s3api put-bucket-versioning \
  --bucket critical-data-bucket \
  --versioning-configuration Status=Enabled,MfaDelete=Enabled \
  --mfa "arn:aws:iam::account:mfa/user mfa-code"

# Configure lifecycle for old versions
aws s3api put-bucket-lifecycle-configuration \
  --bucket critical-data-bucket \
  --lifecycle-configuration file://lifecycle.json
```

Object Lock for Immutability

```json
{
  "ObjectLockEnabled": "Enabled",
  "Rule": {
    "DefaultRetention": {
      "Mode": "GOVERNANCE",
      "Days": 30
    }
  }
}
```

2. RDS Protection

Automated Backup

```bash
# Configure automated backup
aws rds modify-db-instance \
  --db-instance-identifier production-db \
  --backup-retention-period 30 \
  --preferred-backup-window "03:00-04:00" \
  --no-delete-automated-backups
```

Manual Snapshot

```bash
# Create manual snapshot
aws rds create-db-snapshot \
  --db-instance-identifier production-db \
  --db-snapshot-identifier manual-snapshot-$(date +%Y%m%d)
```

3. EBS Volume Protection

Automated Snapshots

```python
import boto3
from datetime import datetime

def create_ebs_snapshots():
    ec2 = boto3.client('ec2')

    # List volumes
    volumes = ec2.describe_volumes()

    for volume in volumes['Volumes']:
        volume_id = volume['VolumeId']

        # Create snapshot
        snapshot = ec2.create_snapshot(
            VolumeId=volume_id,
            Description=f'Automated snapshot - {datetime.now().isoformat()}',
            TagSpecifications=[
                {
                    'ResourceType': 'snapshot',
                    'Tags': [
                        {'Key': 'Purpose', 'Value': 'RansomwareProtection'},
                        {'Key': 'CreatedBy', 'Value': 'AutomatedBackup'}
                    ]
                }
            ]
        )
        print(f"Snapshot {snapshot['SnapshotId']} created for volume {volume_id}")
```

Monitoring and Alerts

1. CloudWatch Alarms

Suspicious Activity Detection

```json
{
  "AlarmName": "SuspiciousS3Activity",
  "MetricName": "NumberOfObjects",
  "Namespace": "AWS/S3",
  "Statistic": "Sum",
  "Period": 300,
  "EvaluationPeriods": 2,
  "Threshold": 1000,
  "ComparisonOperator": "GreaterThanThreshold",
  "AlarmActions": [
    "arn:aws:sns:region:account:security-alerts"
  ]
}
```

2. EventBridge Rules

Critical Event Monitoring

```json
{
  "Name": "RansomwareDetection",
  "EventPattern": {
    "source": ["aws.guardduty"],
    "detail-type": ["GuardDuty Finding"],
    "detail": {
      "type": [
        "Trojan:EC2/BlackholeTraffic",
        "Backdoor:EC2/C&CActivity.B",
        "CryptoCurrency:EC2/BitcoinTool.B"
      ]
    }
  },
  "Targets": [
    {
      "Id": "1",
      "Arn": "arn:aws:lambda:region:account:function:IncidentResponse"
    }
  ]
}
```

Incident Response

1. Automated Response Plan

```python
import json

import boto3

def incident_response_handler(event, context):
    """Lambda function for automated incident response"""
    # Parse GuardDuty event
    finding = json.loads(event['Records'][0]['Sns']['Message'])

    if finding['severity'] >= 7.0:  # High severity
        # 1. Isolate compromised instance
        isolate_instance(finding['service']['resourceRole'])

        # 2. Create forensic snapshot (implemented elsewhere in the post)
        create_forensic_snapshot(finding['service']['resourceRole'])

        # 3. Notify security team (implemented elsewhere in the post)
        notify_security_team(finding)

        # 4. Trigger emergency backup (implemented elsewhere in the post)
        trigger_emergency_backup()

def isolate_instance(resource_info):
    """Isolate suspicious instance"""
    ec2 = boto3.client('ec2')
    instance_id = resource_info['instanceDetails']['instanceId']

    # Create restrictive security group
    sg_response = ec2.create_security_group(
        GroupName=f'quarantine-{instance_id}',
        Description='Quarantine security group'
    )

    # Apply to instance
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[sg_response['GroupId']]
    )
```

2. Recovery Procedures

Data Restoration

```bash
#!/bin/bash
# Data recovery script

BACKUP_VAULT="SecureVault"
RECOVERY_POINT_ARN="$1"

# Restore RDS
aws backup start-restore-job \
  --recovery-point-arn "$RECOVERY_POINT_ARN" \
  --metadata DBInstanceIdentifier=recovered-db \
  --iam-role-arn arn:aws:iam::account:role/BackupRole

# Restore EBS
aws backup start-restore-job \
  --recovery-point-arn "$RECOVERY_POINT_ARN" \
  --metadata VolumeType=gp3,VolumeSize=100 \
  --iam-role-arn arn:aws:iam::account:role/BackupRole

echo "Recovery jobs initiated"
```

Best Practices

1. Prevention

✅ Implement MFA on all accounts
✅ Use the principle of least privilege
✅ Keep systems up to date
✅ Train teams on phishing awareness
✅ Segment networks properly

2. Detection

✅ Monitor logs continuously
✅ Configure real-time alerts
✅ Use threat intelligence tools
✅ Implement honeypots
✅ Analyze anomalous behavior

3. Response

✅ Have a documented response plan
✅ Practice regular simulations
✅ Maintain tested backups
✅ Define communication channels
✅ Document lessons learned

Costs and ROI

Security Investment

| Service | Estimated Monthly Cost | Benefit |
|---|---|---|
| AWS Backup | $50-200 | Fast recovery |
| GuardDuty | $30-100 | Early detection |
| CloudTrail | $20-50 | Complete auditing |
| Config | $40-80 | Compliance |

Protection ROI

- Average ransomware cost: $4.45 million
- Protection investment: $10-50k/year
- ROI: 8,900% - 44,500%

Conclusion

Ransomware protection on AWS requires a layered approach that combines: ...

July 16, 2025 · 5 min · 872 words · Matheus Costa

Introduction to AWS Lambda: Serverless Computing

What is AWS Lambda?

AWS Lambda is a serverless computing service that lets you run code without provisioning or managing servers. You pay only for the compute time you consume.

Key Features

1. Serverless

- No infrastructure management
- Automatic scaling
- Built-in high availability

2. Pricing Model

- Pay-per-use
- Billed per millisecond
- 1 million free requests per month

3. Supported Languages

- Python
- Node.js
- Java
- C#
- Go
- Ruby

Practical Example

Here is a simple example of a Lambda function in Python: ...
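The post's full example is truncated in this preview; as a stand-in, here is a minimal sketch of a Python handler. The event field and response shape are illustrative assumptions, not the post's original code:

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler: return a greeting built from the payload."""
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"})
    }
```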

July 16, 2025 · 1 min · 199 words · Matheus Costa

Zero Trust Architecture: The Future of Corporate Security

What is Zero Trust?

Zero Trust is a security model that operates under the principle "never trust, always verify." Unlike traditional models, it grants no implicit trust to users simply because they are inside the network perimeter.

Fundamental Principles

1. Explicit Verification

- Authenticate and authorize based on all available data points
- User identity, location, device, service, or workload
- Data classification and anomalies

2. Least Privilege Access

- Limit user access with Just-In-Time and Just-Enough-Access (JIT/JEA)
- Risk-based adaptive policies (illustrated in the sketch after this excerpt)
- Data protection

3. Assume Breach

- Minimize blast radius and segment access
- Verify end-to-end encryption
- Use analytics to gain visibility and detect threats

Architecture Components

```mermaid
graph TD
    A[User] --> B[Identity Provider]
    B --> C[Policy Engine]
    C --> D[Access Gateway]
    D --> E[Protected Resources]
    F[Device Trust] --> C
    G[Network Security] --> C
    H[Data Classification] --> C
```

Practical Implementation

1. Identity and Access Management (IAM)

- Multi-Factor Authentication (MFA)
- Single Sign-On (SSO)
- Privileged Access Management (PAM)

2. Network Segmentation

- Micro-segmentation
- Software-Defined Perimeter (SDP)
- Network Access Control (NAC)

3. Device Security

- Mobile Device Management (MDM)
- Endpoint Detection and Response (EDR)
- Device compliance policies

Tools and Technologies

Cloud Providers

- AWS: IAM, GuardDuty, Security Hub
- Azure: Azure AD, Conditional Access
- GCP: Identity-Aware Proxy, BeyondCorp

Specialized Solutions

- Okta, Auth0 (Identity)
- Zscaler, Cloudflare (Network)
- CrowdStrike, SentinelOne (Endpoint)

Implementation Challenges

Technical Complexity ...
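To make the risk-based policy-engine idea concrete, here is a minimal toy sketch of an adaptive access decision; the signals, weights, and thresholds are illustrative assumptions of mine, not any vendor's actual logic:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_enrolled: bool   # identity signal
    device_compliant: bool    # device-trust signal
    known_location: bool      # network/location signal

def decide(request: AccessRequest) -> str:
    """Toy policy engine: combine signals into a risk score, then decide."""
    risk = 0
    risk += 0 if request.device_compliant else 2
    risk += 0 if request.known_location else 1
    if risk == 0:
        return "allow"
    if risk <= 2 and request.user_mfa_enrolled:
        return "step-up-mfa"  # require MFA before granting access
    return "deny"

print(decide(AccessRequest(True, True, True)))     # allow
print(decide(AccessRequest(True, False, True)))    # step-up-mfa
print(decide(AccessRequest(False, False, False)))  # deny
```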

July 16, 2025 · 2 min · 307 words · Matheus Costa

CI/CD with GitHub Actions: Automating Deploy to Cloudflare Pages

Introduction to CI/CD

Continuous Integration/Continuous Deployment (CI/CD) is an essential practice in modern development that automates the process of code integration, testing, and deployment.

Why GitHub Actions?

Advantages

✅ Native integration with GitHub
✅ Free for public repositories
✅ Marketplace with thousands of actions
✅ Support for multiple languages and platforms
✅ Execution in Docker containers

Basic Concepts

- Workflow: automated process
- Job: set of steps executed on a runner
- Step: individual task
- Action: reusable code block

Setting Up the Pipeline

1. Workflow Structure

Create the file .github/workflows/deploy.yml: ...

July 16, 2025 · 3 min · 507 words · Matheus Costa