AWS CloudFormation: Start and Stop EC2 Instances on a Schedule

The following template creates two Lambda functions and two CloudWatch Events rules that start and stop the given EC2 instances on a schedule:

AWSTemplateFormatVersion: '2010-09-09'
Description: 'Lambda function to start and stop an instance based on a schedule'
Parameters:
  InstanceIds:
    Type: 'List<AWS::EC2::Instance::Id>'
    Description: 'The instances to start/stop on a schedule'
  StartCron:
    Type: String
    Description: 'The schedule expression to start the instance'
  StopCron:
    Type: String
    Description: 'The schedule expression to stop the instance'
Resources:
  # Shared execution role: write logs, start/stop EC2 instances
  FunctionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: root
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 'logs:*'
                Resource: 'arn:aws:logs:*:*:*'
              - Effect: Allow
                Action:
                  - 'ec2:StopInstances'
                  - 'ec2:StartInstances'
                Resource: !Sub 'arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*'
  # Starts the instances; the instance ID list is baked into
  # the inline code via !Sub/!Join at deploy time
  StartFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      Handler: index.handler
      Role: !GetAtt [FunctionRole, Arn]
      Code:
        ZipFile: !Sub
          - |
            'use strict';

            var AWS = require('aws-sdk');
            var ec2 = new AWS.EC2({region: process.env.AWS_REGION});

            exports.handler = function (event, context, callback) {
              var params = {
                InstanceIds: '${Instances}'.split(',')
              };
              ec2.startInstances(params, function (err, data) {
                if (err) {
                  callback(err, err.stack);
                } else {
                  callback(null, data);
                }
              });
            };
          - Instances: !Join [",", !Ref InstanceIds]
      Runtime: nodejs4.3
      Timeout: '30'
  # Scheduled rule that triggers the start function
  StartRule:
    Type: 'AWS::Events::Rule'
    Properties:
      ScheduleExpression:
        Ref: StartCron
      Targets:
        - Id: StartInstanceScheduler
          Arn: !GetAtt [StartFunction, Arn]
  # Allows CloudWatch Events to invoke the start function
  StartInvokeLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt [StartFunction, Arn]
      Action: 'lambda:InvokeFunction'
      Principal: events.amazonaws.com
      SourceArn: !GetAtt [StartRule, Arn]
  # Same pattern again for stopping the instances
  StopFunction:
    Type: 'AWS::Lambda::Function'
    Properties:
      Handler: index.handler
      Role: !GetAtt [FunctionRole, Arn]
      Code:
        ZipFile: !Sub
          - |
            'use strict';

            var AWS = require('aws-sdk');
            var ec2 = new AWS.EC2({region: process.env.AWS_REGION});

            exports.handler = function (event, context, callback) {
              var params = {
                InstanceIds: '${Instances}'.split(',')
              };
              ec2.stopInstances(params, function (err, data) {
                if (err) {
                  callback(err, err.stack);
                } else {
                  callback(null, data);
                }
              });
            };
          - Instances: !Join [",", !Ref InstanceIds]
      Runtime: nodejs4.3
      Timeout: '30'
  StopRule:
    Type: 'AWS::Events::Rule'
    Properties:
      ScheduleExpression:
        Ref: StopCron
      Targets:
        - Id: StopInstanceScheduler
          Arn: !GetAtt [StopFunction, Arn]
  StopInvokeLambdaPermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt [StopFunction, Arn]
      Action: 'lambda:InvokeFunction'
      Principal: events.amazonaws.com
      SourceArn: !GetAtt [StopRule, Arn]
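
The three parameters are supplied when you create the stack: the instance IDs plus two CloudWatch Events schedule expressions. For example, cron(0 7 ? * MON-FRI *) as StartCron starts the instances at 07:00 UTC on weekdays, and cron(0 19 ? * MON-FRI *) as StopCron stops them again at 19:00 UTC. The !Join turns the instance ID list into a comma-separated string, which !Sub then bakes into the inline Lambda code.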

ALB Log Analysis with AWS Athena

Let's analyze ALB log files with Athena.

First, thanks to Rob Witoff (https://medium.com/@robwitoff/athena-alb-log-analysis-b874d0958909) for his excellent example.

Thanks also to AWS Support, who helped me figure out some issues.

 

AWS Support tested the CREATE TABLE query below against my sample data and confirmed that this schema reads the ALB log data successfully:

CREATE EXTERNAL TABLE IF NOT EXISTS logs.web_alb (
 type string,
 time string,
 elb string,
 client_ip string,
 client_port int,
 target_ip string,
 target_port int,
 request_processing_time double,
 target_processing_time double,
 response_processing_time double,
 elb_status_code string,
 target_status_code string,
 received_bytes bigint,
 sent_bytes bigint,
 request_verb string,
 request_url string,
 request_proto string,
 user_agent string,
 ssl_cipher string,
 ssl_protocol string,
 target_group_arn string,
 trace_id string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
 'serialization.format' = '1','input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*)[:\-]([0-9]*) ([-.0-9]*) ([-.0-9]*) ([-.0-9]*) (|[-0-9]*) (-|[-0-9]*) ([-0-9]*) ([-0-9]*) \"([^ ]*) ([^ ]*) (- |[^ ]*)\" (\"[^\"]*\") ([A-Z0-9-]+) ([A-Za-z0-9.-]*) ([^ ]*) (.*)' )
LOCATION 's3://<your_bucket>/'

One note from AWS Support: the sample data contained no values for year, month, and day as expected by the PARTITIONED BY(year string, month string, day string) clause in my original query, so the PARTITIONED BY clause had to be removed.
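
If you do want partitions, the PARTITIONED BY clause stays in the CREATE TABLE, and each day then has to be registered before Athena can see its data. A minimal sketch, assuming the usual AWSLogs/<account_id>/elasticloadbalancing/<region>/year/month/day prefix layout (bucket and account ID are placeholders):

ALTER TABLE logs.web_alb
ADD PARTITION (year = '2017', month = '05', day = '01')
LOCATION 's3://<your_bucket>/AWSLogs/<account_id>/elasticloadbalancing/eu-central-1/2017/05/01/';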


  1. First we must enable access logs in the attributes of the ALB.

 

athena_elb_create_s3_bucket

Enable access logs, choose a name for your bucket, and select 'Create this location for me'. After this, Athena also has access to the log files.

You can generate some data with benchmark tools against the website, or just wait a few minutes until there is some data to analyze.

Then we go to Athena.

Now it gets a little bit tricky: we have to create the database by hand. Click 'Add table'.

athena_create_db

 

Fill it out: the database is logs, the table name is web_alb, and as location put in the S3 URL of your ALB logs:

s3://com.mywebsite.httplog2/AWSLogs/595264310722/elasticloadbalancing/eu-central-1/2017/05/

athena_create_db_2

Click 'Next' and choose 'Apache Web Logs'.

 

athena_create_db_3

Add a single column named test; the name does not matter, because we will drop this table again shortly.

athena_create_db_4

 

athena_create_db_5

Do nothing here and click 'Create table'.

 

Now we have created a database named logs and a table named web_alb.

athena_create_db_6

Now we simply drop the table web_alb again:

DROP TABLE IF EXISTS web_alb PURGE;

 

athena_drob_table

Then we create the correct table with all the columns that we need:

CREATE EXTERNAL TABLE IF NOT EXISTS logs.web_alb (
 type string,
 time string, 
 elb string, 
 client_ip string, 
 client_port int, 
 target_ip string, 
 target_port int, 
 request_processing_time double, 
 target_processing_time double, 
 response_processing_time double, 
 elb_status_code string, 
 target_status_code string, 
 received_bytes bigint, 
 sent_bytes bigint, 
 request_verb string, 
 request_url string, 
 request_proto string, 
 user_agent string, 
 ssl_cipher string, 
 ssl_protocol string,
 target_group_arn string,
 trace_id string) 
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
 'serialization.format' = '1','input.regex' = '([^ ]*) ([^ ]*) ([^ ]*) ([^ ]*):([0-9]*) ([^ ]*)[:\-]([0-9]*) ([-.0-9]*) ([-.0-9]*) ([-.0-9]*) (|[-0-9]*) (-|[-0-9]*) ([-0-9]*) ([-0-9]*) \"([^ ]*) ([^ ]*) (- |[^ ]*)\" (\"[^\"]*\") ([A-Z0-9-]+) ([A-Za-z0-9.-]*) ([^ ]*) (.*)' ) 
LOCATION 's3://com.mysite.httplog2/AWSLogs/595264310722/elasticloadbalancing/eu-central-1/2017/05/'

 

athena_create_db_7


Now we fire up our first SELECT:

SELECT * FROM logs.web_alb LIMIT 100;

athena_first_select
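
If the input.regex matches your log format you will see fully parsed rows like this; if all columns come back NULL instead, the regex did not match your log lines.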

That was all.

An example: the most frequent visitors, sorted by number of requests, together with their user agents:

SELECT user_agent, client_ip, COUNT(*) as count
FROM logs.web_alb
GROUP BY user_agent, client_ip
ORDER BY COUNT(*) DESC
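
In the same way you can break the traffic down by ELB status code, for example to spot error spikes. A small sketch using only columns from the table above:

SELECT elb_status_code, COUNT(*) as count
FROM logs.web_alb
GROUP BY elb_status_code
ORDER BY COUNT(*) DESC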


Jenkins EC2 Plugin with IAM Role

Use this policy for the Jenkins master and for the slaves themselves.

 

The important part is iam:PassRole.

 

https://aws.amazon.com/blogs/security/granting-permission-to-launch-ec2-instances-with-iam-roles-passrole-permission/

https://engineering.aol.com/bits/574213645bf6233aab3f2c71/cross-account-aws-deploployments-in-jenkins

https://wiki.jenkins-ci.org/display/JENKINS/Amazon+EC2+Plugin

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:DescribeRegions"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "ec2:CreateTags",
        "ec2:DescribeInstances",
        "ec2:DescribeKeyPairs",
        "ec2:GetConsoleOutput",
        "ec2:RunInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:DescribeTags",
        "ec2:DeleteTags",
        "ec2:DescribeRegions",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeImages",
        "iam:PassRole",
        "ec2:TerminateInstances"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "eu-central-1"
        }
      }
    }
  ]
}
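
Note: iam:PassRole is what allows the plugin to attach an instance profile to the slave instances it launches. In a real setup it would be tighter to move iam:PassRole into its own statement and scope its Resource to the ARN of the specific slave role instead of '*'.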