Merge remote-tracking branch 'spulec/master'

Commit c36f1b6aab by Alexander Mohr, 2017-11-30 13:01:33 -08:00
138 changed files with 15011 additions and 852 deletions


@@ -47,3 +47,4 @@ Moto is written by Steve Pulec with contributions from:
* [Adam Stauffer](https://github.com/adamstauffer)
* [Guy Templeton](https://github.com/gjtempleton)
* [Michael van Tellingen](https://github.com/mvantellingen)
* [Jessie Nadler](https://github.com/nadlerjessie)


@@ -3,6 +3,43 @@ Moto Changelog
Latest
------
1.1.25
-----
* Implemented IoT and IoT Data
* Implemented resource tagging API
* EC2 AMIs now have owners
* Improved codegen scaffolding
* Many small fixes to EC2 support
* CloudFormation ELBv2 support
* UTF-8 fixes for S3
* Implemented SSM get_parameters_by_path
* More advanced DynamoDB querying
1.1.24
-----
* Implemented Batch
* Fixed regression with moto_server dashboard
* Fixed and closed many outstanding bugs
* Fixed serious performance problem with EC2 reservation listing
* Fixed Route53 list_resource_record_sets
1.1.23
-----
* Implemented X-Ray
* Implemented Autoscaling EC2 attachment
* Implemented Autoscaling Load Balancer methods
* Improved DynamoDB filter expressions
1.1.22
-----
* Lambda policies
* DynamoDB filter expressions
* EC2 Spot fleet improvements
1.1.21
-----

IMPLEMENTATION_COVERAGE.md: new file, 3677 lines (diff suppressed because it is too large)


@@ -1,4 +1,6 @@
include README.md LICENSE AUTHORS.md
include requirements.txt requirements-dev.txt tox.ini
include moto/ec2/resources/instance_types.json
include moto/ec2/resources/amis.json
recursive-include moto/templates *
recursive-include tests *


@@ -1,5 +1,13 @@
SHELL := /bin/bash
ifeq ($(TEST_SERVER_MODE), true)
# exclude test_iot and test_iotdata for now
# because authentication of iot is very complicated
TEST_EXCLUDE := --exclude='test_iot.*'
else
TEST_EXCLUDE :=
endif
init:
@python setup.py develop
@pip install -r requirements.txt
@@ -10,8 +18,7 @@ lint:
test: lint
rm -f .coverage
rm -rf cover
@nosetests -sv --with-coverage --cover-html ./tests/
@nosetests -sv --with-coverage --cover-html ./tests/ $(TEST_EXCLUDE)
test_server:
@TEST_SERVER_MODE=true nosetests -sv --with-coverage --cover-html ./tests/
@@ -29,7 +36,14 @@ tag_github_release:
git tag `python setup.py --version`
git push origin `python setup.py --version`
publish: upload_pypi_artifact push_dockerhub_image tag_github_release
publish: implementation_coverage \
upload_pypi_artifact \
tag_github_release \
push_dockerhub_image
implementation_coverage:
./scripts/implementation_coverage.py > IMPLEMENTATION_COVERAGE.md
git commit IMPLEMENTATION_COVERAGE.md -m "Updating implementation coverage"
scaffold:
@pip install -r requirements-dev.txt > /dev/null


@@ -68,10 +68,12 @@ It gets even better! Moto isn't just for Python code and it isn't just for S3.
|------------------------------------------------------------------------------|
| Cloudwatch | @mock_cloudwatch | basic endpoints done |
|------------------------------------------------------------------------------|
| CloudwatchEvents | @mock_events | all endpoints done |
|------------------------------------------------------------------------------|
| Data Pipeline | @mock_datapipeline| basic endpoints done |
|------------------------------------------------------------------------------|
| DynamoDB | @mock_dynamodb | core endpoints done |
| DynamoDB2 | @mock_dynamodb2 | core endpoints + partial indexes |
| DynamoDB2 | @mock_dynamodb2 | all endpoints + partial indexes |
|------------------------------------------------------------------------------|
| EC2 | @mock_ec2 | core endpoints done |
| - AMI | | core endpoints done |
@@ -86,7 +88,7 @@ It gets even better! Moto isn't just for Python code and it isn't just for S3.
|------------------------------------------------------------------------------|
| ELB | @mock_elb | core endpoints done |
|------------------------------------------------------------------------------|
| ELBv2 | @mock_elbv2 | core endpoints done |
| ELBv2 | @mock_elbv2 | all endpoints done |
|------------------------------------------------------------------------------|
| EMR | @mock_emr | core endpoints done |
|------------------------------------------------------------------------------|
@@ -94,6 +96,9 @@ It gets even better! Moto isn't just for Python code and it isn't just for S3.
|------------------------------------------------------------------------------|
| IAM | @mock_iam | core endpoints done |
|------------------------------------------------------------------------------|
| IoT | @mock_iot | core endpoints done |
| | @mock_iotdata | core endpoints done |
|------------------------------------------------------------------------------|
| Lambda | @mock_lambda | basic endpoints done, requires |
| | | docker |
|------------------------------------------------------------------------------|
@@ -115,7 +120,7 @@ It gets even better! Moto isn't just for Python code and it isn't just for S3.
|------------------------------------------------------------------------------|
| S3 | @mock_s3 | core endpoints done |
|------------------------------------------------------------------------------|
| SES | @mock_ses | core endpoints done |
| SES | @mock_ses | all endpoints done |
|------------------------------------------------------------------------------|
| SNS | @mock_sns | all endpoints done |
|------------------------------------------------------------------------------|
@@ -127,7 +132,7 @@ It gets even better! Moto isn't just for Python code and it isn't just for S3.
|------------------------------------------------------------------------------|
| SWF | @mock_swf | basic endpoints done |
|------------------------------------------------------------------------------|
| X-Ray | @mock_xray | core endpoints done |
| X-Ray | @mock_xray | all endpoints done |
|------------------------------------------------------------------------------|
```
@@ -297,6 +302,7 @@ boto3.resource(
## Install
```console
$ pip install moto
```


@@ -38,8 +38,12 @@ from .sts import mock_sts, mock_sts_deprecated # flake8: noqa
from .ssm import mock_ssm # flake8: noqa
from .route53 import mock_route53, mock_route53_deprecated # flake8: noqa
from .swf import mock_swf, mock_swf_deprecated # flake8: noqa
from .xray import mock_xray # flake8: noqa
from .xray import mock_xray, mock_xray_client, XRaySegment # flake8: noqa
from .logs import mock_logs, mock_logs_deprecated # flake8: noqa
from .batch import mock_batch # flake8: noqa
from .resourcegroupstaggingapi import mock_resourcegroupstaggingapi # flake8: noqa
from .iot import mock_iot # flake8: noqa
from .iotdata import mock_iotdata # flake8: noqa
try:


@@ -170,7 +170,7 @@ class CertBundle(BaseModel):
try:
self._cert = cryptography.x509.load_pem_x509_certificate(self.cert, default_backend())
now = datetime.datetime.now()
now = datetime.datetime.utcnow()
if self._cert.not_valid_after < now:
raise AWSValidationException('The certificate has expired, is not valid.')


@@ -185,7 +185,7 @@ class AWSCertificateManagerResponse(BaseResponse):
idempotency_token = self._get_param('IdempotencyToken')
subject_alt_names = self._get_param('SubjectAlternativeNames')
if len(subject_alt_names) > 10:
if subject_alt_names is not None and len(subject_alt_names) > 10:
# There is initial AWS limit of 10
msg = 'An ACM limit has been exceeded. Need to request SAN limit to be raised'
return json.dumps({'__type': 'LimitExceededException', 'message': msg}), dict(status=400)
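The added `None` check exists because `SubjectAlternativeNames` is an optional request parameter and `len(None)` raises `TypeError`. A simplified standalone sketch of the guard (with the exception type swapped for `ValueError`):

```python
def check_san_limit(subject_alt_names):
    # subject_alt_names may be None when the request omits the field
    # entirely; test for None before calling len() on it
    if subject_alt_names is not None and len(subject_alt_names) > 10:
        raise ValueError('An ACM limit has been exceeded. '
                         'Need to request SAN limit to be raised')
    return subject_alt_names
```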


@@ -0,0 +1,14 @@
from __future__ import unicode_literals
from moto.core.exceptions import RESTError
class AutoscalingClientError(RESTError):
code = 500
class ResourceContentionError(AutoscalingClientError):
def __init__(self):
super(ResourceContentionError, self).__init__(
"ResourceContentionError",
"You already have a pending update to an Auto Scaling resource (for example, a group, instance, or load balancer).")


@@ -4,7 +4,11 @@ from moto.compat import OrderedDict
from moto.core import BaseBackend, BaseModel
from moto.ec2 import ec2_backends
from moto.elb import elb_backends
from moto.elbv2 import elbv2_backends
from moto.elb.exceptions import LoadBalancerNotFoundError
from .exceptions import (
ResourceContentionError,
)
# http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html#Cooldown
DEFAULT_COOLDOWN = 300
@@ -13,9 +17,10 @@ ASG_NAME_TAG = "aws:autoscaling:groupName"
class InstanceState(object):
def __init__(self, instance, lifecycle_state="InService"):
def __init__(self, instance, lifecycle_state="InService", health_status="Healthy"):
self.instance = instance
self.lifecycle_state = lifecycle_state
self.health_status = health_status
class FakeScalingPolicy(BaseModel):
@@ -146,7 +151,7 @@ class FakeAutoScalingGroup(BaseModel):
def __init__(self, name, availability_zones, desired_capacity, max_size,
min_size, launch_config_name, vpc_zone_identifier,
default_cooldown, health_check_period, health_check_type,
load_balancers, placement_group, termination_policies,
load_balancers, target_group_arns, placement_group, termination_policies,
autoscaling_backend, tags):
self.autoscaling_backend = autoscaling_backend
self.name = name
@@ -163,6 +168,7 @@
self.health_check_period = health_check_period
self.health_check_type = health_check_type if health_check_type else "EC2"
self.load_balancers = load_balancers
self.target_group_arns = target_group_arns
self.placement_group = placement_group
self.termination_policies = termination_policies
@@ -176,9 +182,10 @@
launch_config_name = properties.get("LaunchConfigurationName")
load_balancer_names = properties.get("LoadBalancerNames", [])
target_group_arns = properties.get("TargetGroupARNs", [])
backend = autoscaling_backends[region_name]
group = backend.create_autoscaling_group(
group = backend.create_auto_scaling_group(
name=resource_name,
availability_zones=properties.get("AvailabilityZones", []),
desired_capacity=properties.get("DesiredCapacity"),
@@ -191,6 +198,7 @@
health_check_period=properties.get("HealthCheckGracePeriod"),
health_check_type=properties.get("HealthCheckType"),
load_balancers=load_balancer_names,
target_group_arns=target_group_arns,
placement_group=None,
termination_policies=properties.get("TerminationPolicies", []),
tags=properties.get("Tags", []),
@@ -207,13 +215,13 @@
def delete_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
backend = autoscaling_backends[region_name]
try:
backend.delete_autoscaling_group(resource_name)
backend.delete_auto_scaling_group(resource_name)
except KeyError:
pass
def delete(self, region_name):
backend = autoscaling_backends[region_name]
backend.delete_autoscaling_group(self.name)
backend.delete_auto_scaling_group(self.name)
@property
def physical_resource_id(self):
@@ -221,7 +229,7 @@
def update(self, availability_zones, desired_capacity, max_size, min_size,
launch_config_name, vpc_zone_identifier, default_cooldown,
health_check_period, health_check_type, load_balancers,
health_check_period, health_check_type,
placement_group, termination_policies):
if availability_zones:
self.availability_zones = availability_zones
@@ -259,27 +267,8 @@
# Need more instances
count_needed = int(self.desired_capacity) - int(curr_instance_count)
propagated_tags = {}
for tag in self.tags:
# boto uses 'propagate_at_launch
# boto3 and cloudformation use PropagateAtLaunch
if 'propagate_at_launch' in tag and tag['propagate_at_launch'] == 'true':
propagated_tags[tag['key']] = tag['value']
if 'PropagateAtLaunch' in tag and tag['PropagateAtLaunch']:
propagated_tags[tag['Key']] = tag['Value']
propagated_tags[ASG_NAME_TAG] = self.name
reservation = self.autoscaling_backend.ec2_backend.add_instances(
self.launch_config.image_id,
count_needed,
self.launch_config.user_data,
self.launch_config.security_groups,
instance_type=self.launch_config.instance_type,
tags={'instance': propagated_tags}
)
for instance in reservation.instances:
instance.autoscaling_group = self
self.instance_states.append(InstanceState(instance))
propagated_tags = self.get_propagated_tags()
self.replace_autoscaling_group_instances(count_needed, propagated_tags)
else:
# Need to remove some instances
count_to_remove = curr_instance_count - self.desired_capacity
@@ -290,20 +279,51 @@
instance_ids_to_remove)
self.instance_states = self.instance_states[count_to_remove:]
def get_propagated_tags(self):
propagated_tags = {}
for tag in self.tags:
# boto uses 'propagate_at_launch
# boto3 and cloudformation use PropagateAtLaunch
if 'propagate_at_launch' in tag and tag['propagate_at_launch'] == 'true':
propagated_tags[tag['key']] = tag['value']
if 'PropagateAtLaunch' in tag and tag['PropagateAtLaunch']:
propagated_tags[tag['Key']] = tag['Value']
return propagated_tags
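The extracted `get_propagated_tags` has to reconcile two tag spellings: boto sends lower-case keys with the flag as the string `'true'`, while boto3 and CloudFormation send CamelCase keys with a real boolean. A standalone sketch of that reconciliation (the sample tags below are made up):

```python
ASG_NAME_TAG = "aws:autoscaling:groupName"

def get_propagated_tags(tags, group_name):
    propagated = {}
    for tag in tags:
        # boto style: {'key': ..., 'value': ..., 'propagate_at_launch': 'true'}
        if tag.get('propagate_at_launch') == 'true':
            propagated[tag['key']] = tag['value']
        # boto3/CloudFormation style: {'Key': ..., 'Value': ..., 'PropagateAtLaunch': True}
        elif tag.get('PropagateAtLaunch'):
            propagated[tag['Key']] = tag['Value']
    # every launched instance is also tagged with its group's name
    propagated[ASG_NAME_TAG] = group_name
    return propagated
```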
def replace_autoscaling_group_instances(self, count_needed, propagated_tags):
propagated_tags[ASG_NAME_TAG] = self.name
reservation = self.autoscaling_backend.ec2_backend.add_instances(
self.launch_config.image_id,
count_needed,
self.launch_config.user_data,
self.launch_config.security_groups,
instance_type=self.launch_config.instance_type,
tags={'instance': propagated_tags}
)
for instance in reservation.instances:
instance.autoscaling_group = self
self.instance_states.append(InstanceState(instance))
def append_target_groups(self, target_group_arns):
append = [x for x in target_group_arns if x not in self.target_group_arns]
self.target_group_arns.extend(append)
class AutoScalingBackend(BaseBackend):
def __init__(self, ec2_backend, elb_backend):
def __init__(self, ec2_backend, elb_backend, elbv2_backend):
self.autoscaling_groups = OrderedDict()
self.launch_configurations = OrderedDict()
self.policies = {}
self.ec2_backend = ec2_backend
self.elb_backend = elb_backend
self.elbv2_backend = elbv2_backend
def reset(self):
ec2_backend = self.ec2_backend
elb_backend = self.elb_backend
elbv2_backend = self.elbv2_backend
self.__dict__ = {}
self.__init__(ec2_backend, elb_backend)
self.__init__(ec2_backend, elb_backend, elbv2_backend)
def create_launch_configuration(self, name, image_id, key_name, kernel_id, ramdisk_id,
security_groups, user_data, instance_type,
@@ -338,12 +358,13 @@ class AutoScalingBackend(BaseBackend):
def delete_launch_configuration(self, launch_configuration_name):
self.launch_configurations.pop(launch_configuration_name, None)
def create_autoscaling_group(self, name, availability_zones,
def create_auto_scaling_group(self, name, availability_zones,
desired_capacity, max_size, min_size,
launch_config_name, vpc_zone_identifier,
default_cooldown, health_check_period,
health_check_type, load_balancers,
placement_group, termination_policies, tags):
target_group_arns, placement_group,
termination_policies, tags):
def make_int(value):
return int(value) if value is not None else value
@@ -369,6 +390,7 @@
health_check_period=health_check_period,
health_check_type=health_check_type,
load_balancers=load_balancers,
target_group_arns=target_group_arns,
placement_group=placement_group,
termination_policies=termination_policies,
autoscaling_backend=self,
@@ -377,38 +399,79 @@
self.autoscaling_groups[name] = group
self.update_attached_elbs(group.name)
self.update_attached_target_groups(group.name)
return group
def update_autoscaling_group(self, name, availability_zones,
def update_auto_scaling_group(self, name, availability_zones,
desired_capacity, max_size, min_size,
launch_config_name, vpc_zone_identifier,
default_cooldown, health_check_period,
health_check_type, load_balancers,
placement_group, termination_policies):
health_check_type, placement_group,
termination_policies):
group = self.autoscaling_groups[name]
group.update(availability_zones, desired_capacity, max_size,
min_size, launch_config_name, vpc_zone_identifier,
default_cooldown, health_check_period, health_check_type,
load_balancers, placement_group, termination_policies)
placement_group, termination_policies)
return group
def describe_autoscaling_groups(self, names):
def describe_auto_scaling_groups(self, names):
groups = self.autoscaling_groups.values()
if names:
return [group for group in groups if group.name in names]
else:
return list(groups)
def delete_autoscaling_group(self, group_name):
def delete_auto_scaling_group(self, group_name):
self.set_desired_capacity(group_name, 0)
self.autoscaling_groups.pop(group_name, None)
def describe_autoscaling_instances(self):
def describe_auto_scaling_instances(self):
instance_states = []
for group in self.autoscaling_groups.values():
instance_states.extend(group.instance_states)
return instance_states
def attach_instances(self, group_name, instance_ids):
group = self.autoscaling_groups[group_name]
original_size = len(group.instance_states)
if (original_size + len(instance_ids)) > group.max_size:
raise ResourceContentionError
else:
group.desired_capacity = original_size + len(instance_ids)
new_instances = [InstanceState(self.ec2_backend.get_instance(x)) for x in instance_ids]
for instance in new_instances:
self.ec2_backend.create_tags([instance.instance.id], {ASG_NAME_TAG: group.name})
group.instance_states.extend(new_instances)
self.update_attached_elbs(group.name)
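The new `attach_instances` refuses to grow the group past `max_size` and otherwise bumps the desired capacity to the new instance count. A simplified sketch of that bookkeeping, detached from the EC2 backend (instance ids are plain strings here):

```python
class ResourceContentionError(Exception):
    pass

def attach_instances(instance_states, new_instance_ids, max_size):
    # attaching must not push the group beyond its max_size
    if len(instance_states) + len(new_instance_ids) > max_size:
        raise ResourceContentionError()
    # on success, desired capacity follows the new instance count
    instance_states = instance_states + new_instance_ids
    return instance_states, len(instance_states)
```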
def set_instance_health(self, instance_id, health_status, should_respect_grace_period):
instance = self.ec2_backend.get_instance(instance_id)
instance_state = next(instance_state for group in self.autoscaling_groups.values()
for instance_state in group.instance_states if instance_state.instance.id == instance.id)
instance_state.health_status = health_status
def detach_instances(self, group_name, instance_ids, should_decrement):
group = self.autoscaling_groups[group_name]
original_size = len(group.instance_states)
detached_instances = [x for x in group.instance_states if x.instance.id in instance_ids]
for instance in detached_instances:
self.ec2_backend.delete_tags([instance.instance.id], {ASG_NAME_TAG: group.name})
new_instance_state = [x for x in group.instance_states if x.instance.id not in instance_ids]
group.instance_states = new_instance_state
if should_decrement:
group.desired_capacity = original_size - len(instance_ids)
else:
count_needed = len(instance_ids)
group.replace_autoscaling_group_instances(count_needed, group.get_propagated_tags())
self.update_attached_elbs(group_name)
return detached_instances
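The `detach_instances` logic above either shrinks the desired capacity or replaces the detached instances one-for-one, depending on `should_decrement`. A simplified standalone sketch (instance ids are plain strings; the replacement ids are hypothetical):

```python
def detach_instances(instance_states, instance_ids, should_decrement, desired_capacity):
    detached = [i for i in instance_states if i in instance_ids]
    remaining = [i for i in instance_states if i not in instance_ids]
    if should_decrement:
        # the group simply shrinks
        desired_capacity -= len(instance_ids)
    else:
        # capacity is held constant, so fresh instances replace the detached ones
        remaining += ['i-new-%d' % n for n in range(len(instance_ids))]
    return remaining, desired_capacity, detached
```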
def set_desired_capacity(self, group_name, desired_capacity):
group = self.autoscaling_groups[group_name]
group.set_desired_capacity(desired_capacity)
@@ -461,6 +524,10 @@
group_instance_ids = set(
state.instance.id for state in group.instance_states)
# skip this if group.load_balancers is empty
# otherwise elb_backend.describe_load_balancers returns all available load balancers
if not group.load_balancers:
return
try:
elbs = self.elb_backend.describe_load_balancers(
names=group.load_balancers)
@@ -475,8 +542,25 @@
self.elb_backend.deregister_instances(
elb.name, elb_instace_ids - group_instance_ids)
def create_or_update_tags(self, tags):
def update_attached_target_groups(self, group_name):
group = self.autoscaling_groups[group_name]
group_instance_ids = set(
state.instance.id for state in group.instance_states)
# no action necessary if target_group_arns is empty
if not group.target_group_arns:
return
target_groups = self.elbv2_backend.describe_target_groups(
target_group_arns=group.target_group_arns,
load_balancer_arn=None,
names=None)
for target_group in target_groups:
asg_targets = [{'id': x, 'port': target_group.port} for x in group_instance_ids]
self.elbv2_backend.register_targets(target_group.arn, (asg_targets))
def create_or_update_tags(self, tags):
for tag in tags:
group_name = tag["resource_id"]
group = self.autoscaling_groups[group_name]
@@ -496,8 +580,42 @@
group.tags = new_tags
def attach_load_balancers(self, group_name, load_balancer_names):
group = self.autoscaling_groups[group_name]
group.load_balancers.extend(
[x for x in load_balancer_names if x not in group.load_balancers])
self.update_attached_elbs(group_name)
def describe_load_balancers(self, group_name):
return self.autoscaling_groups[group_name].load_balancers
def detach_load_balancers(self, group_name, load_balancer_names):
group = self.autoscaling_groups[group_name]
group_instance_ids = set(
state.instance.id for state in group.instance_states)
elbs = self.elb_backend.describe_load_balancers(names=group.load_balancers)
for elb in elbs:
self.elb_backend.deregister_instances(
elb.name, group_instance_ids)
group.load_balancers = [x for x in group.load_balancers if x not in load_balancer_names]
def attach_load_balancer_target_groups(self, group_name, target_group_arns):
group = self.autoscaling_groups[group_name]
group.append_target_groups(target_group_arns)
self.update_attached_target_groups(group_name)
def describe_load_balancer_target_groups(self, group_name):
return self.autoscaling_groups[group_name].target_group_arns
def detach_load_balancer_target_groups(self, group_name, target_group_arns):
group = self.autoscaling_groups[group_name]
group.target_group_arns = [x for x in group.target_group_arns if x not in target_group_arns]
for target_group in target_group_arns:
asg_targets = [{'id': x.instance.id} for x in group.instance_states]
self.elbv2_backend.deregister_targets(target_group, (asg_targets))
autoscaling_backends = {}
for region, ec2_backend in ec2_backends.items():
autoscaling_backends[region] = AutoScalingBackend(
ec2_backend, elb_backends[region])
ec2_backend, elb_backends[region], elbv2_backends[region])


@@ -1,6 +1,7 @@
from __future__ import unicode_literals
from moto.core.responses import BaseResponse
from moto.core.utils import amz_crc32, amzn_request_id
from .models import autoscaling_backends
@@ -66,7 +67,7 @@ class AutoScalingResponse(BaseResponse):
return template.render()
def create_auto_scaling_group(self):
self.autoscaling_backend.create_autoscaling_group(
self.autoscaling_backend.create_auto_scaling_group(
name=self._get_param('AutoScalingGroupName'),
availability_zones=self._get_multi_param(
'AvailabilityZones.member'),
@@ -79,6 +80,7 @@
health_check_period=self._get_int_param('HealthCheckGracePeriod'),
health_check_type=self._get_param('HealthCheckType'),
load_balancers=self._get_multi_param('LoadBalancerNames.member'),
target_group_arns=self._get_multi_param('TargetGroupARNs.member'),
placement_group=self._get_param('PlacementGroup'),
termination_policies=self._get_multi_param(
'TerminationPolicies.member'),
@@ -87,10 +89,78 @@
template = self.response_template(CREATE_AUTOSCALING_GROUP_TEMPLATE)
return template.render()
@amz_crc32
@amzn_request_id
def attach_instances(self):
group_name = self._get_param('AutoScalingGroupName')
instance_ids = self._get_multi_param('InstanceIds.member')
self.autoscaling_backend.attach_instances(
group_name, instance_ids)
template = self.response_template(ATTACH_INSTANCES_TEMPLATE)
return template.render()
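`_get_multi_param('InstanceIds.member')` relies on the AWS Query protocol's numbered-member encoding. A simplified standalone reimplementation (not moto's actual helper) shows the shape of that parsing:

```python
from urllib.parse import parse_qs

def get_multi_param(querystring, prefix):
    # AWS Query-protocol lists arrive as numbered members:
    #   InstanceIds.member.1=i-123&InstanceIds.member.2=i-456
    values, index = [], 1
    while '%s.%d' % (prefix, index) in querystring:
        values.append(querystring['%s.%d' % (prefix, index)][0])
        index += 1
    return values

qs = parse_qs('Action=AttachInstances&InstanceIds.member.1=i-123&InstanceIds.member.2=i-456')
```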
@amz_crc32
@amzn_request_id
def set_instance_health(self):
instance_id = self._get_param('InstanceId')
health_status = self._get_param("HealthStatus")
if health_status not in ['Healthy', 'Unhealthy']:
raise ValueError('Valid instance health states are: [Healthy, Unhealthy]')
should_respect_grace_period = self._get_param("ShouldRespectGracePeriod")
self.autoscaling_backend.set_instance_health(instance_id, health_status, should_respect_grace_period)
template = self.response_template(SET_INSTANCE_HEALTH_TEMPLATE)
return template.render()
@amz_crc32
@amzn_request_id
def detach_instances(self):
group_name = self._get_param('AutoScalingGroupName')
instance_ids = self._get_multi_param('InstanceIds.member')
should_decrement_string = self._get_param('ShouldDecrementDesiredCapacity')
if should_decrement_string == 'true':
should_decrement = True
else:
should_decrement = False
detached_instances = self.autoscaling_backend.detach_instances(
group_name, instance_ids, should_decrement)
template = self.response_template(DETACH_INSTANCES_TEMPLATE)
return template.render(detached_instances=detached_instances)
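`ShouldDecrementDesiredCapacity` arrives as the string `'true'` or `'false'`, which is why the handler compares strings instead of truth-testing the raw value. As a sketch:

```python
def get_bool_param(querystring, name):
    # Query-protocol booleans are the strings 'true'/'false'; truth-testing
    # the raw value would treat the non-empty string 'false' as True
    return querystring.get(name) == 'true'
```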
@amz_crc32
@amzn_request_id
def attach_load_balancer_target_groups(self):
group_name = self._get_param('AutoScalingGroupName')
target_group_arns = self._get_multi_param('TargetGroupARNs.member')
self.autoscaling_backend.attach_load_balancer_target_groups(
group_name, target_group_arns)
template = self.response_template(ATTACH_LOAD_BALANCER_TARGET_GROUPS_TEMPLATE)
return template.render()
@amz_crc32
@amzn_request_id
def describe_load_balancer_target_groups(self):
group_name = self._get_param('AutoScalingGroupName')
target_group_arns = self.autoscaling_backend.describe_load_balancer_target_groups(
group_name)
template = self.response_template(DESCRIBE_LOAD_BALANCER_TARGET_GROUPS)
return template.render(target_group_arns=target_group_arns)
@amz_crc32
@amzn_request_id
def detach_load_balancer_target_groups(self):
group_name = self._get_param('AutoScalingGroupName')
target_group_arns = self._get_multi_param('TargetGroupARNs.member')
self.autoscaling_backend.detach_load_balancer_target_groups(
group_name, target_group_arns)
template = self.response_template(DETACH_LOAD_BALANCER_TARGET_GROUPS_TEMPLATE)
return template.render()
def describe_auto_scaling_groups(self):
names = self._get_multi_param("AutoScalingGroupNames.member")
token = self._get_param("NextToken")
all_groups = self.autoscaling_backend.describe_autoscaling_groups(names)
all_groups = self.autoscaling_backend.describe_auto_scaling_groups(names)
all_names = [group.name for group in all_groups]
if token:
start = all_names.index(token) + 1
@@ -107,7 +177,7 @@ class AutoScalingResponse(BaseResponse):
return template.render(groups=groups, next_token=next_token)
def update_auto_scaling_group(self):
self.autoscaling_backend.update_autoscaling_group(
self.autoscaling_backend.update_auto_scaling_group(
name=self._get_param('AutoScalingGroupName'),
availability_zones=self._get_multi_param(
'AvailabilityZones.member'),
@@ -119,7 +189,6 @@
default_cooldown=self._get_int_param('DefaultCooldown'),
health_check_period=self._get_int_param('HealthCheckGracePeriod'),
health_check_type=self._get_param('HealthCheckType'),
load_balancers=self._get_multi_param('LoadBalancerNames.member'),
placement_group=self._get_param('PlacementGroup'),
termination_policies=self._get_multi_param(
'TerminationPolicies.member'),
@@ -129,7 +198,7 @@
def delete_auto_scaling_group(self):
group_name = self._get_param('AutoScalingGroupName')
self.autoscaling_backend.delete_autoscaling_group(group_name)
self.autoscaling_backend.delete_auto_scaling_group(group_name)
template = self.response_template(DELETE_AUTOSCALING_GROUP_TEMPLATE)
return template.render()
@@ -149,7 +218,7 @@
return template.render()
def describe_auto_scaling_instances(self):
instance_states = self.autoscaling_backend.describe_autoscaling_instances()
instance_states = self.autoscaling_backend.describe_auto_scaling_instances()
template = self.response_template(
DESCRIBE_AUTOSCALING_INSTANCES_TEMPLATE)
return template.render(instance_states=instance_states)
@@ -186,6 +255,34 @@
template = self.response_template(EXECUTE_POLICY_TEMPLATE)
return template.render()
@amz_crc32
@amzn_request_id
def attach_load_balancers(self):
group_name = self._get_param('AutoScalingGroupName')
load_balancer_names = self._get_multi_param("LoadBalancerNames.member")
self.autoscaling_backend.attach_load_balancers(
group_name, load_balancer_names)
template = self.response_template(ATTACH_LOAD_BALANCERS_TEMPLATE)
return template.render()
@amz_crc32
@amzn_request_id
def describe_load_balancers(self):
group_name = self._get_param('AutoScalingGroupName')
load_balancers = self.autoscaling_backend.describe_load_balancers(group_name)
template = self.response_template(DESCRIBE_LOAD_BALANCERS_TEMPLATE)
return template.render(load_balancers=load_balancers)
@amz_crc32
@amzn_request_id
def detach_load_balancers(self):
group_name = self._get_param('AutoScalingGroupName')
load_balancer_names = self._get_multi_param("LoadBalancerNames.member")
self.autoscaling_backend.detach_load_balancers(
group_name, load_balancer_names)
template = self.response_template(DETACH_LOAD_BALANCERS_TEMPLATE)
return template.render()
CREATE_LAUNCH_CONFIGURATION_TEMPLATE = """<CreateLaunchConfigurationResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<ResponseMetadata>
@@ -217,7 +314,7 @@ DESCRIBE_LAUNCH_CONFIGURATIONS_TEMPLATE = """<DescribeLaunchConfigurationsRespon
{% endif %}
<InstanceType>{{ launch_configuration.instance_type }}</InstanceType>
<LaunchConfigurationARN>arn:aws:autoscaling:us-east-1:803981987763:launchConfiguration:
9dbbbf87-6141-428a-a409-0752edbe6cad:launchConfigurationName/my-test-lc</LaunchConfigurationARN>
9dbbbf87-6141-428a-a409-0752edbe6cad:launchConfigurationName/{{ launch_configuration.name }}</LaunchConfigurationARN>
{% if launch_configuration.block_device_mappings %}
<BlockDeviceMappings>
{% for mount_point, mapping in launch_configuration.block_device_mappings.items() %}
@@ -284,6 +381,72 @@ CREATE_AUTOSCALING_GROUP_TEMPLATE = """<CreateAutoScalingGroupResponse xmlns="ht
</ResponseMetadata>
</CreateAutoScalingGroupResponse>"""
ATTACH_LOAD_BALANCER_TARGET_GROUPS_TEMPLATE = """<AttachLoadBalancerTargetGroupsResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<AttachLoadBalancerTargetGroupsResult>
</AttachLoadBalancerTargetGroupsResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</AttachLoadBalancerTargetGroupsResponse>"""
ATTACH_INSTANCES_TEMPLATE = """<AttachInstancesResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<AttachInstancesResult>
</AttachInstancesResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</AttachInstancesResponse>"""
DESCRIBE_LOAD_BALANCER_TARGET_GROUPS = """<DescribeLoadBalancerTargetGroupsResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<DescribeLoadBalancerTargetGroupsResult>
<LoadBalancerTargetGroups>
{% for arn in target_group_arns %}
<member>
<LoadBalancerTargetGroupARN>{{ arn }}</LoadBalancerTargetGroupARN>
<State>Added</State>
</member>
{% endfor %}
</LoadBalancerTargetGroups>
</DescribeLoadBalancerTargetGroupsResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DescribeLoadBalancerTargetGroupsResponse>"""
DETACH_INSTANCES_TEMPLATE = """<DetachInstancesResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<DetachInstancesResult>
<Activities>
{% for instance in detached_instances %}
<member>
<ActivityId>5091cb52-547a-47ce-a236-c9ccbc2cb2c9EXAMPLE</ActivityId>
<AutoScalingGroupName>{{ group_name }}</AutoScalingGroupName>
<Cause>
At 2017-10-15T15:55:21Z instance {{ instance.instance.id }} was detached in response to a user request.
</Cause>
<Description>Detaching EC2 instance: {{ instance.instance.id }}</Description>
<StartTime>2017-10-15T15:55:21Z</StartTime>
<EndTime>2017-10-15T15:55:21Z</EndTime>
<StatusCode>InProgress</StatusCode>
<StatusMessage>InProgress</StatusMessage>
<Progress>50</Progress>
<Details>details</Details>
</member>
{% endfor %}
</Activities>
</DetachInstancesResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DetachInstancesResponse>"""
DETACH_LOAD_BALANCER_TARGET_GROUPS_TEMPLATE = """<DetachLoadBalancerTargetGroupsResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<DetachLoadBalancerTargetGroupsResult>
</DetachLoadBalancerTargetGroupsResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DetachLoadBalancerTargetGroupsResponse>"""
DESCRIBE_AUTOSCALING_GROUPS_TEMPLATE = """<DescribeAutoScalingGroupsResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<DescribeAutoScalingGroupsResult>
<AutoScalingGroups>
@ -309,7 +472,7 @@ DESCRIBE_AUTOSCALING_GROUPS_TEMPLATE = """<DescribeAutoScalingGroupsResponse xml
<Instances>
{% for instance_state in group.instance_states %}
<member>
<HealthStatus>HEALTHY</HealthStatus>
<HealthStatus>{{ instance_state.health_status }}</HealthStatus>
<AvailabilityZone>us-east-1e</AvailabilityZone>
<InstanceId>{{ instance_state.instance.id }}</InstanceId>
<LaunchConfigurationName>{{ group.launch_config_name }}</LaunchConfigurationName>
@ -341,7 +504,7 @@ DESCRIBE_AUTOSCALING_GROUPS_TEMPLATE = """<DescribeAutoScalingGroupsResponse xml
<HealthCheckGracePeriod>{{ group.health_check_period }}</HealthCheckGracePeriod>
<DefaultCooldown>{{ group.default_cooldown }}</DefaultCooldown>
<AutoScalingGroupARN>arn:aws:autoscaling:us-east-1:803981987763:autoScalingGroup:ca861182-c8f9-4ca7-b1eb-cd35505f5ebb
:autoScalingGroupName/my-test-asg-lbs</AutoScalingGroupARN>
:autoScalingGroupName/{{ group.name }}</AutoScalingGroupARN>
{% if group.termination_policies %}
<TerminationPolicies>
{% for policy in group.termination_policies %}
@ -384,7 +547,7 @@ DESCRIBE_AUTOSCALING_INSTANCES_TEMPLATE = """<DescribeAutoScalingInstancesRespon
<AutoScalingInstances>
{% for instance_state in instance_states %}
<member>
<HealthStatus>HEALTHY</HealthStatus>
<HealthStatus>{{ instance_state.health_status }}</HealthStatus>
<AutoScalingGroupName>{{ instance_state.instance.autoscaling_group.name }}</AutoScalingGroupName>
<AvailabilityZone>us-east-1e</AvailabilityZone>
<InstanceId>{{ instance_state.instance.id }}</InstanceId>
@ -450,3 +613,40 @@ DELETE_POLICY_TEMPLATE = """<DeleteScalingPolicyResponse xmlns="http://autoscali
<RequestId>70a76d42-9665-11e2-9fdf-211deEXAMPLE</RequestId>
</ResponseMetadata>
</DeleteScalingPolicyResponse>"""
ATTACH_LOAD_BALANCERS_TEMPLATE = """<AttachLoadBalancersResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<AttachLoadBalancersResult></AttachLoadBalancersResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</AttachLoadBalancersResponse>"""
DESCRIBE_LOAD_BALANCERS_TEMPLATE = """<DescribeLoadBalancersResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<DescribeLoadBalancersResult>
<LoadBalancers>
{% for load_balancer in load_balancers %}
<member>
<LoadBalancerName>{{ load_balancer }}</LoadBalancerName>
<State>Added</State>
</member>
{% endfor %}
</LoadBalancers>
</DescribeLoadBalancersResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DescribeLoadBalancersResponse>"""
DETACH_LOAD_BALANCERS_TEMPLATE = """<DetachLoadBalancersResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<DetachLoadBalancersResult></DetachLoadBalancersResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DetachLoadBalancersResponse>"""
SET_INSTANCE_HEALTH_TEMPLATE = """<SetInstanceHealthResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
<SetInstanceHealthResult></SetInstanceHealthResult>
<ResponseMetadata>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</SetInstanceHealthResponse>"""


@ -2,6 +2,7 @@ from __future__ import unicode_literals
import base64
from collections import defaultdict
import copy
import datetime
import docker.errors
import hashlib
@ -17,18 +18,23 @@ import tarfile
import calendar
import threading
import traceback
import weakref
import requests.adapters
import boto.awslambda
from moto.core import BaseBackend, BaseModel
from moto.core.exceptions import RESTError
from moto.core.utils import unix_time_millis
from moto.s3.models import s3_backend
from moto.logs.models import logs_backends
from moto.s3.exceptions import MissingBucket, MissingKey
from moto import settings
from .utils import make_function_arn
logger = logging.getLogger(__name__)
ACCOUNT_ID = '123456789012'
try:
from tempfile import TemporaryDirectory
@ -121,7 +127,7 @@ class _DockerDataVolumeContext:
class LambdaFunction(BaseModel):
def __init__(self, spec, region, validate_s3=True):
def __init__(self, spec, region, validate_s3=True, version=1):
# required
self.region = region
self.code = spec['Code']
@ -161,7 +167,7 @@ class LambdaFunction(BaseModel):
'VpcConfig', {'SubnetIds': [], 'SecurityGroupIds': []})
# auto-generated
self.version = '$LATEST'
self.version = version
self.last_modified = datetime.datetime.utcnow().strftime(
'%Y-%m-%d %H:%M:%S')
@ -203,11 +209,15 @@ class LambdaFunction(BaseModel):
self.code_size = key.size
self.code_sha_256 = hashlib.sha256(key.value).hexdigest()
self.function_arn = 'arn:aws:lambda:{}:123456789012:function:{}'.format(
self.region, self.function_name)
self.function_arn = make_function_arn(self.region, ACCOUNT_ID, self.function_name, version)
self.tags = dict()
def set_version(self, version):
self.function_arn = make_function_arn(self.region, ACCOUNT_ID, self.function_name, version)
self.version = version
self.last_modified = datetime.datetime.utcnow().strftime('%Y-%m-%d %H:%M:%S')
@property
def vpc_config(self):
config = self._vpc_config.copy()
@ -231,7 +241,7 @@ class LambdaFunction(BaseModel):
"Role": self.role,
"Runtime": self.run_time,
"Timeout": self.timeout,
"Version": self.version,
"Version": str(self.version),
"VpcConfig": self.vpc_config,
}
@ -298,7 +308,12 @@ class LambdaFunction(BaseModel):
volumes=["{}:/var/task".format(data_vol.name)], environment=env_vars, detach=True, **run_kwargs)
finally:
if container:
exit_code = container.wait()
try:
exit_code = container.wait(timeout=300)
except requests.exceptions.ReadTimeout:
exit_code = -1
container.stop()
container.kill()
output = container.logs(stdout=False, stderr=True)
output += container.logs(stdout=True, stderr=False)
container.remove()
@ -384,8 +399,7 @@ class LambdaFunction(BaseModel):
from moto.cloudformation.exceptions import \
UnformattedGetAttTemplateException
if attribute_name == 'Arn':
return 'arn:aws:lambda:{0}:123456789012:function:{1}'.format(
self.region, self.function_name)
return make_function_arn(self.region, ACCOUNT_ID, self.function_name)
raise UnformattedGetAttTemplateException()
@staticmethod
@ -441,9 +455,121 @@ class LambdaVersion(BaseModel):
return LambdaVersion(spec)
class LambdaStorage(object):
def __init__(self):
# Maps function name -> {'latest': fn, 'versions': [fn, ...], 'alias': WeakValueDictionary}
self._functions = {}
self._arns = weakref.WeakValueDictionary()
def _get_latest(self, name):
return self._functions[name]['latest']
def _get_version(self, name, version):
index = version - 1
try:
return self._functions[name]['versions'][index]
except IndexError:
return None
def _get_alias(self, name, alias):
return self._functions[name]['alias'].get(alias, None)
def get_function(self, name, qualifier=None):
if name not in self._functions:
return None
if qualifier is None:
return self._get_latest(name)
try:
return self._get_version(name, int(qualifier))
except ValueError:
return self._functions[name]['latest']
def get_arn(self, arn):
return self._arns.get(arn, None)
def put_function(self, fn):
"""
:param fn: Function
:type fn: LambdaFunction
"""
if fn.function_name in self._functions:
self._functions[fn.function_name]['latest'] = fn
else:
self._functions[fn.function_name] = {
'latest': fn,
'versions': [],
'alias': weakref.WeakValueDictionary()
}
self._arns[fn.function_arn] = fn
def publish_function(self, name):
if name not in self._functions:
return None
if not self._functions[name]['latest']:
return None
new_version = len(self._functions[name]['versions']) + 1
fn = copy.copy(self._functions[name]['latest'])
fn.set_version(new_version)
self._functions[name]['versions'].append(fn)
return fn
def del_function(self, name, qualifier=None):
if name in self._functions:
if not qualifier:
# Something may still be referencing this, so delete all ARNs
latest = self._functions[name]['latest'].function_arn
del self._arns[latest]
for fn in self._functions[name]['versions']:
del self._arns[fn.function_arn]
del self._functions[name]
return True
elif qualifier == '$LATEST':
self._functions[name]['latest'] = None
# If there are no functions left
if not self._functions[name]['versions'] and not self._functions[name]['latest']:
del self._functions[name]
return True
else:
fn = self.get_function(name, qualifier)
if fn:
self._functions[name]['versions'].remove(fn)
# If there are no functions left
if not self._functions[name]['versions'] and not self._functions[name]['latest']:
del self._functions[name]
return True
return False
def all(self):
result = []
for function_group in self._functions.values():
if function_group['latest'] is not None:
result.append(function_group['latest'])
result.extend(function_group['versions'])
return result
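The version-numbering scheme used by `publish_function` above (versions are 1-indexed, each publish snapshots the current `$LATEST`) can be sketched in isolation. The `MiniLambdaStore` below is a hypothetical, simplified stand-in for illustration, not moto's actual `LambdaStorage` class:

```python
import copy

class MiniLambdaStore:
    """Simplified sketch of the versioning bookkeeping (hypothetical names)."""

    def __init__(self):
        self._functions = {}

    def put_function(self, name, fn):
        # Storing again under an existing name replaces only 'latest'
        if name in self._functions:
            self._functions[name]['latest'] = fn
        else:
            self._functions[name] = {'latest': fn, 'versions': []}

    def publish_function(self, name):
        entry = self._functions.get(name)
        if not entry or entry['latest'] is None:
            return None
        # Versions are numbered 1, 2, 3, ... in publish order
        new_version = len(entry['versions']) + 1
        fn = copy.copy(entry['latest'])
        fn['version'] = new_version
        entry['versions'].append(fn)
        return fn

store = MiniLambdaStore()
store.put_function('f', {'version': '$LATEST'})
v1 = store.publish_function('f')
v2 = store.publish_function('f')
```

Each published snapshot is independent of `$LATEST`, which keeps the special version string until the next publish.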
class LambdaBackend(BaseBackend):
def __init__(self, region_name):
self._functions = {}
self._lambdas = LambdaStorage()
self.region_name = region_name
def reset(self):
@ -451,31 +577,31 @@ class LambdaBackend(BaseBackend):
self.__dict__ = {}
self.__init__(region_name)
def has_function(self, function_name):
return function_name in self._functions
def has_function_arn(self, function_arn):
return self.get_function_by_arn(function_arn) is not None
def create_function(self, spec):
fn = LambdaFunction(spec, self.region_name)
self._functions[fn.function_name] = fn
function_name = spec.get('FunctionName', None)
if function_name is None:
raise RESTError('InvalidParameterValueException', 'Missing FunctionName')
fn = LambdaFunction(spec, self.region_name, version='$LATEST')
self._lambdas.put_function(fn)
return fn
def get_function(self, function_name):
return self._functions[function_name]
def publish_function(self, function_name):
return self._lambdas.publish_function(function_name)
def get_function(self, function_name, qualifier=None):
return self._lambdas.get_function(function_name, qualifier)
def get_function_by_arn(self, function_arn):
for function in self._functions.values():
if function.function_arn == function_arn:
return function
return None
return self._lambdas.get_arn(function_arn)
def delete_function(self, function_name):
del self._functions[function_name]
def delete_function(self, function_name, qualifier=None):
return self._lambdas.del_function(function_name, qualifier)
def list_functions(self):
return self._functions.values()
return self._lambdas.all()
def send_message(self, function_name, message):
event = {
@ -510,23 +636,31 @@ class LambdaBackend(BaseBackend):
]
}
self._functions[function_name].invoke(json.dumps(event), {}, {})
self._lambdas.get_function(function_name, None).invoke(json.dumps(event), {}, {})
def list_tags(self, resource):
return self.get_function_by_arn(resource).tags
def tag_resource(self, resource, tags):
self.get_function_by_arn(resource).tags.update(tags)
fn = self.get_function_by_arn(resource)
if not fn:
return False
fn.tags.update(tags)
return True
def untag_resource(self, resource, tagKeys):
function = self.get_function_by_arn(resource)
for key in tagKeys:
try:
del function.tags[key]
except KeyError:
pass
# Don't care
fn = self.get_function_by_arn(resource)
if fn:
for key in tagKeys:
try:
del fn.tags[key]
except KeyError:
pass
# Don't care
return True
return False
def add_policy(self, function_name, policy):
self.get_function(function_name).policy = policy


@ -5,14 +5,31 @@ import re
try:
from urllib import unquote
from urlparse import urlparse, parse_qs
except ImportError:
from urllib.parse import unquote, urlparse, parse_qs
from urllib.parse import unquote
from moto.core.utils import amz_crc32, amzn_request_id
from moto.core.responses import BaseResponse
from .models import lambda_backends
class LambdaResponse(BaseResponse):
@property
def json_body(self):
"""
:return: JSON
:rtype: dict
"""
return json.loads(self.body)
@property
def lambda_backend(self):
"""
Get backend
:return: Lambda Backend
:rtype: moto.awslambda.models.LambdaBackend
"""
return lambda_backends[self.region]
def root(self, request, full_url, headers):
self.setup_class(request, full_url, headers)
@ -32,6 +49,18 @@ class LambdaResponse(BaseResponse):
else:
raise ValueError("Cannot handle request")
def versions(self, request, full_url, headers):
self.setup_class(request, full_url, headers)
if request.method == 'GET':
# This is ListVersionsByFunction; not yet implemented
raise ValueError("Cannot handle request")
elif request.method == 'POST':
return self._publish_function(request, full_url, headers)
else:
raise ValueError("Cannot handle request")
@amz_crc32
@amzn_request_id
def invoke(self, request, full_url, headers):
self.setup_class(request, full_url, headers)
if request.method == 'POST':
@ -39,6 +68,8 @@ class LambdaResponse(BaseResponse):
else:
raise ValueError("Cannot handle request")
@amz_crc32
@amzn_request_id
def invoke_async(self, request, full_url, headers):
self.setup_class(request, full_url, headers)
if request.method == 'POST':
@ -88,13 +119,12 @@ class LambdaResponse(BaseResponse):
def _invoke(self, request, full_url):
response_headers = {}
lambda_backend = self.get_lambda_backend(full_url)
path = request.path if hasattr(request, 'path') else request.path_url
function_name = path.split('/')[-2]
function_name = self.path.rsplit('/', 2)[-2]
qualifier = self._get_param('qualifier')
if lambda_backend.has_function(function_name):
fn = lambda_backend.get_function(function_name)
fn = self.lambda_backend.get_function(function_name, qualifier)
if fn:
payload = fn.invoke(self.body, self.headers, response_headers)
response_headers['Content-Length'] = str(len(payload))
return 202, response_headers, payload
@ -103,66 +133,70 @@ class LambdaResponse(BaseResponse):
def _invoke_async(self, request, full_url):
response_headers = {}
lambda_backend = self.get_lambda_backend(full_url)
path = request.path if hasattr(request, 'path') else request.path_url
function_name = path.split('/')[-3]
if lambda_backend.has_function(function_name):
fn = lambda_backend.get_function(function_name)
fn.invoke(self.body, self.headers, response_headers)
response_headers['Content-Length'] = str(0)
return 202, response_headers, ""
function_name = self.path.rsplit('/', 3)[-3]
fn = self.lambda_backend.get_function(function_name, None)
if fn:
payload = fn.invoke(self.body, self.headers, response_headers)
response_headers['Content-Length'] = str(len(payload))
return 202, response_headers, payload
else:
return 404, response_headers, "{}"
def _list_functions(self, request, full_url, headers):
lambda_backend = self.get_lambda_backend(full_url)
return 200, {}, json.dumps({
"Functions": [fn.get_configuration() for fn in lambda_backend.list_functions()],
# "NextMarker": str(uuid.uuid4()),
})
result = {
'Functions': []
}
for fn in self.lambda_backend.list_functions():
json_data = fn.get_configuration()
result['Functions'].append(json_data)
return 200, {}, json.dumps(result)
def _create_function(self, request, full_url, headers):
lambda_backend = self.get_lambda_backend(full_url)
spec = json.loads(self.body)
try:
fn = lambda_backend.create_function(spec)
fn = self.lambda_backend.create_function(self.json_body)
except ValueError as e:
return 400, {}, json.dumps({"Error": {"Code": e.args[0], "Message": e.args[1]}})
else:
config = fn.get_configuration()
return 201, {}, json.dumps(config)
def _publish_function(self, request, full_url, headers):
function_name = self.path.rsplit('/', 2)[-2]
fn = self.lambda_backend.publish_function(function_name)
if fn:
config = fn.get_configuration()
return 200, {}, json.dumps(config)
else:
return 404, {}, "{}"
def _delete_function(self, request, full_url, headers):
lambda_backend = self.get_lambda_backend(full_url)
function_name = self.path.rsplit('/', 1)[-1]
qualifier = self._get_param('Qualifier', None)
path = request.path if hasattr(request, 'path') else request.path_url
function_name = path.split('/')[-1]
if lambda_backend.has_function(function_name):
lambda_backend.delete_function(function_name)
if self.lambda_backend.delete_function(function_name, qualifier):
return 204, {}, ""
else:
return 404, {}, "{}"
def _get_function(self, request, full_url, headers):
lambda_backend = self.get_lambda_backend(full_url)
function_name = self.path.rsplit('/', 1)[-1]
qualifier = self._get_param('Qualifier', None)
path = request.path if hasattr(request, 'path') else request.path_url
function_name = path.split('/')[-1]
fn = self.lambda_backend.get_function(function_name, qualifier)
if lambda_backend.has_function(function_name):
fn = lambda_backend.get_function(function_name)
if fn:
code = fn.get_code()
return 200, {}, json.dumps(code)
else:
return 404, {}, "{}"
def get_lambda_backend(self, full_url):
from moto.awslambda.models import lambda_backends
region = self._get_aws_region(full_url)
return lambda_backends[region]
def _get_aws_region(self, full_url):
region = re.search(self.region_regex, full_url)
if region:
@ -171,41 +205,27 @@ class LambdaResponse(BaseResponse):
return self.default_region
def _list_tags(self, request, full_url):
lambda_backend = self.get_lambda_backend(full_url)
function_arn = unquote(self.path.rsplit('/', 1)[-1])
path = request.path if hasattr(request, 'path') else request.path_url
function_arn = unquote(path.split('/')[-1])
if lambda_backend.has_function_arn(function_arn):
function = lambda_backend.get_function_by_arn(function_arn)
return 200, {}, json.dumps(dict(Tags=function.tags))
fn = self.lambda_backend.get_function_by_arn(function_arn)
if fn:
return 200, {}, json.dumps({'Tags': fn.tags})
else:
return 404, {}, "{}"
def _tag_resource(self, request, full_url):
lambda_backend = self.get_lambda_backend(full_url)
function_arn = unquote(self.path.rsplit('/', 1)[-1])
path = request.path if hasattr(request, 'path') else request.path_url
function_arn = unquote(path.split('/')[-1])
spec = json.loads(self.body)
if lambda_backend.has_function_arn(function_arn):
lambda_backend.tag_resource(function_arn, spec['Tags'])
if self.lambda_backend.tag_resource(function_arn, self.json_body['Tags']):
return 200, {}, "{}"
else:
return 404, {}, "{}"
def _untag_resource(self, request, full_url):
lambda_backend = self.get_lambda_backend(full_url)
function_arn = unquote(self.path.rsplit('/', 1)[-1])
tag_keys = self.querystring['tagKeys']
path = request.path if hasattr(request, 'path') else request.path_url
function_arn = unquote(path.split('/')[-1].split('?')[0])
tag_keys = parse_qs(urlparse(full_url).query)['tagKeys']
if lambda_backend.has_function_arn(function_arn):
lambda_backend.untag_resource(function_arn, tag_keys)
if self.lambda_backend.untag_resource(function_arn, tag_keys):
return 204, {}, "{}"
else:
return 404, {}, "{}"


@ -10,6 +10,7 @@ response = LambdaResponse()
url_paths = {
'{0}/(?P<api_version>[^/]+)/functions/?$': response.root,
r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/?$': response.function,
r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/versions/?$': response.versions,
r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/invocations/?$': response.invoke,
r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/invoke-async/?$': response.invoke_async,
r'{0}/(?P<api_version>[^/]+)/tags/(?P<resource_arn>.+)': response.tag,

moto/awslambda/utils.py Normal file

@ -0,0 +1,15 @@
from collections import namedtuple
ARN = namedtuple('ARN', ['region', 'account', 'function_name', 'version'])
def make_function_arn(region, account, name, version='1'):
return 'arn:aws:lambda:{0}:{1}:function:{2}:{3}'.format(region, account, name, version)
def split_function_arn(arn):
arn = arn.replace('arn:aws:lambda:', '')
region, account, _, name, version = arn.split(':')
return ARN(region, account, name, version)
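A self-contained sketch of the intended round trip between the two helpers (note that `str.replace` requires both the search string and its replacement, so the prefix must be stripped with `replace('arn:aws:lambda:', '')`):

```python
from collections import namedtuple

ARN = namedtuple('ARN', ['region', 'account', 'function_name', 'version'])

def make_function_arn(region, account, name, version='1'):
    return 'arn:aws:lambda:{0}:{1}:function:{2}:{3}'.format(region, account, name, version)

def split_function_arn(arn):
    arn = arn.replace('arn:aws:lambda:', '')  # strip the fixed prefix
    # the discarded '_' field is the literal 'function' segment
    region, account, _, name, version = arn.split(':')
    return ARN(region, account, name, version)

arn = make_function_arn('us-east-1', '123456789012', 'myfunc', 3)
parsed = split_function_arn(arn)
```

Splitting always yields strings, even when the version was passed in as an integer.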


@ -35,11 +35,17 @@ from moto.sqs import sqs_backends
from moto.ssm import ssm_backends
from moto.sts import sts_backends
from moto.xray import xray_backends
from moto.iot import iot_backends
from moto.iotdata import iotdata_backends
from moto.batch import batch_backends
from moto.resourcegroupstaggingapi import resourcegroupstaggingapi_backends
BACKENDS = {
'acm': acm_backends,
'apigateway': apigateway_backends,
'autoscaling': autoscaling_backends,
'batch': batch_backends,
'cloudformation': cloudformation_backends,
'cloudwatch': cloudwatch_backends,
'datapipeline': datapipeline_backends,
@ -72,7 +78,10 @@ BACKENDS = {
'sts': sts_backends,
'route53': route53_backends,
'lambda': lambda_backends,
'xray': xray_backends
'xray': xray_backends,
'resourcegroupstaggingapi': resourcegroupstaggingapi_backends,
'iot': iot_backends,
'iot-data': iotdata_backends,
}

moto/batch/__init__.py Normal file

@ -0,0 +1,6 @@
from __future__ import unicode_literals
from .models import batch_backends
from ..core.models import base_decorator
batch_backend = batch_backends['us-east-1']
mock_batch = base_decorator(batch_backends)

moto/batch/exceptions.py Normal file

@ -0,0 +1,37 @@
from __future__ import unicode_literals
import json
class AWSError(Exception):
CODE = None
STATUS = 400
def __init__(self, message, code=None, status=None):
self.message = message
self.code = code if code is not None else self.CODE
self.status = status if status is not None else self.STATUS
def response(self):
return json.dumps({'__type': self.code, 'message': self.message}), dict(status=self.status)
class InvalidRequestException(AWSError):
CODE = 'InvalidRequestException'
class InvalidParameterValueException(AWSError):
CODE = 'InvalidParameterValue'
class ValidationError(AWSError):
CODE = 'ValidationError'
class InternalFailure(AWSError):
CODE = 'InternalFailure'
STATUS = 500
class ClientException(AWSError):
CODE = 'ClientException'
STATUS = 400
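The `response()` helper above serializes an exception into the JSON error body and status metadata the Batch response handlers return directly. A minimal self-contained demonstration (the error message is made up for illustration):

```python
import json

class AWSError(Exception):
    CODE = None
    STATUS = 400

    def __init__(self, message, code=None, status=None):
        super(AWSError, self).__init__(message)
        self.message = message
        self.code = code if code is not None else self.CODE
        self.status = status if status is not None else self.STATUS

    def response(self):
        # JSON error shape expected by AWS clients: '__type' plus 'message'
        return json.dumps({'__type': self.code, 'message': self.message}), dict(status=self.status)

class ClientException(AWSError):
    CODE = 'ClientException'
    STATUS = 400

body, meta = ClientException('compute environment missing').response()
```

Subclasses only need to set `CODE` (and optionally `STATUS`); the base class does the serialization.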

moto/batch/models.py Normal file

File diff suppressed because it is too large

moto/batch/responses.py Normal file

@ -0,0 +1,296 @@
from __future__ import unicode_literals
from moto.core.responses import BaseResponse
from .models import batch_backends
from six.moves.urllib.parse import urlsplit
from .exceptions import AWSError
import json
class BatchResponse(BaseResponse):
def _error(self, code, message):
return json.dumps({'__type': code, 'message': message}), dict(status=400)
@property
def batch_backend(self):
"""
:return: Batch Backend
:rtype: moto.batch.models.BatchBackend
"""
return batch_backends[self.region]
@property
def json(self):
if self.body is None or self.body == '':
self._json = {}
elif not hasattr(self, '_json'):
try:
self._json = json.loads(self.body)
except ValueError:  # json.JSONDecodeError subclasses ValueError; also covers Python 2
self._json = {}
return self._json
def _get_param(self, param_name, if_none=None):
val = self.json.get(param_name)
if val is not None:
return val
return if_none
def _get_action(self):
# Return element after the /v1/*
return urlsplit(self.uri).path.lstrip('/').split('/')[1]
# CreateComputeEnvironment
def createcomputeenvironment(self):
compute_env_name = self._get_param('computeEnvironmentName')
compute_resource = self._get_param('computeResources')
service_role = self._get_param('serviceRole')
state = self._get_param('state')
_type = self._get_param('type')
try:
name, arn = self.batch_backend.create_compute_environment(
compute_environment_name=compute_env_name,
_type=_type, state=state,
compute_resources=compute_resource,
service_role=service_role
)
except AWSError as err:
return err.response()
result = {
'computeEnvironmentArn': arn,
'computeEnvironmentName': name
}
return json.dumps(result)
# DescribeComputeEnvironments
def describecomputeenvironments(self):
compute_environments = self._get_param('computeEnvironments')
max_results = self._get_param('maxResults') # Ignored, should be int
next_token = self._get_param('nextToken') # Ignored
envs = self.batch_backend.describe_compute_environments(compute_environments, max_results=max_results, next_token=next_token)
result = {'computeEnvironments': envs}
return json.dumps(result)
# DeleteComputeEnvironment
def deletecomputeenvironment(self):
compute_environment = self._get_param('computeEnvironment')
try:
self.batch_backend.delete_compute_environment(compute_environment)
except AWSError as err:
return err.response()
return ''
# UpdateComputeEnvironment
def updatecomputeenvironment(self):
compute_env_name = self._get_param('computeEnvironment')
compute_resource = self._get_param('computeResources')
service_role = self._get_param('serviceRole')
state = self._get_param('state')
try:
name, arn = self.batch_backend.update_compute_environment(
compute_environment_name=compute_env_name,
compute_resources=compute_resource,
service_role=service_role,
state=state
)
except AWSError as err:
return err.response()
result = {
'computeEnvironmentArn': arn,
'computeEnvironmentName': name
}
return json.dumps(result)
# CreateJobQueue
def createjobqueue(self):
compute_env_order = self._get_param('computeEnvironmentOrder')
queue_name = self._get_param('jobQueueName')
priority = self._get_param('priority')
state = self._get_param('state')
try:
name, arn = self.batch_backend.create_job_queue(
queue_name=queue_name,
priority=priority,
state=state,
compute_env_order=compute_env_order
)
except AWSError as err:
return err.response()
result = {
'jobQueueArn': arn,
'jobQueueName': name
}
return json.dumps(result)
# DescribeJobQueues
def describejobqueues(self):
job_queues = self._get_param('jobQueues')
max_results = self._get_param('maxResults') # Ignored, should be int
next_token = self._get_param('nextToken') # Ignored
queues = self.batch_backend.describe_job_queues(job_queues, max_results=max_results, next_token=next_token)
result = {'jobQueues': queues}
return json.dumps(result)
# UpdateJobQueue
def updatejobqueue(self):
compute_env_order = self._get_param('computeEnvironmentOrder')
queue_name = self._get_param('jobQueue')
priority = self._get_param('priority')
state = self._get_param('state')
try:
name, arn = self.batch_backend.update_job_queue(
queue_name=queue_name,
priority=priority,
state=state,
compute_env_order=compute_env_order
)
except AWSError as err:
return err.response()
result = {
'jobQueueArn': arn,
'jobQueueName': name
}
return json.dumps(result)
# DeleteJobQueue
def deletejobqueue(self):
queue_name = self._get_param('jobQueue')
self.batch_backend.delete_job_queue(queue_name)
return ''
# RegisterJobDefinition
def registerjobdefinition(self):
container_properties = self._get_param('containerProperties')
def_name = self._get_param('jobDefinitionName')
parameters = self._get_param('parameters')
retry_strategy = self._get_param('retryStrategy')
_type = self._get_param('type')
try:
name, arn, revision = self.batch_backend.register_job_definition(
def_name=def_name,
parameters=parameters,
_type=_type,
retry_strategy=retry_strategy,
container_properties=container_properties
)
except AWSError as err:
return err.response()
result = {
'jobDefinitionArn': arn,
'jobDefinitionName': name,
'revision': revision
}
return json.dumps(result)
# DeregisterJobDefinition
def deregisterjobdefinition(self):
queue_name = self._get_param('jobDefinition')
self.batch_backend.deregister_job_definition(queue_name)
return ''
# DescribeJobDefinitions
def describejobdefinitions(self):
job_def_name = self._get_param('jobDefinitionName')
job_def_list = self._get_param('jobDefinitions')
max_results = self._get_param('maxResults')
next_token = self._get_param('nextToken')
status = self._get_param('status')
job_defs = self.batch_backend.describe_job_definitions(job_def_name, job_def_list, status, max_results, next_token)
result = {'jobDefinitions': [job.describe() for job in job_defs]}
return json.dumps(result)
# SubmitJob
def submitjob(self):
container_overrides = self._get_param('containerOverrides')
depends_on = self._get_param('dependsOn')
job_def = self._get_param('jobDefinition')
job_name = self._get_param('jobName')
job_queue = self._get_param('jobQueue')
parameters = self._get_param('parameters')
retries = self._get_param('retryStrategy')
try:
name, job_id = self.batch_backend.submit_job(
job_name, job_def, job_queue,
parameters=parameters,
retries=retries,
depends_on=depends_on,
container_overrides=container_overrides
)
except AWSError as err:
return err.response()
result = {
'jobId': job_id,
'jobName': name,
}
return json.dumps(result)
# DescribeJobs
def describejobs(self):
jobs = self._get_param('jobs')
try:
return json.dumps({'jobs': self.batch_backend.describe_jobs(jobs)})
except AWSError as err:
return err.response()
# ListJobs
def listjobs(self):
job_queue = self._get_param('jobQueue')
job_status = self._get_param('jobStatus')
max_results = self._get_param('maxResults')
next_token = self._get_param('nextToken')
try:
jobs = self.batch_backend.list_jobs(job_queue, job_status, max_results, next_token)
except AWSError as err:
return err.response()
result = {'jobSummaryList': [{'jobId': job.job_id, 'jobName': job.job_name} for job in jobs]}
return json.dumps(result)
# TerminateJob
def terminatejob(self):
job_id = self._get_param('jobId')
reason = self._get_param('reason')
try:
self.batch_backend.terminate_job(job_id, reason)
except AWSError as err:
return err.response()
return ''
# CancelJob
def canceljob(self):  # There are some AWS semantic differences, but for us they're identical ;-)
return self.terminatejob()

moto/batch/urls.py Normal file

@ -0,0 +1,25 @@
from __future__ import unicode_literals
from .responses import BatchResponse
url_bases = [
"https?://batch.(.+).amazonaws.com",
]
url_paths = {
'{0}/v1/createcomputeenvironment$': BatchResponse.dispatch,
'{0}/v1/describecomputeenvironments$': BatchResponse.dispatch,
'{0}/v1/deletecomputeenvironment': BatchResponse.dispatch,
'{0}/v1/updatecomputeenvironment': BatchResponse.dispatch,
'{0}/v1/createjobqueue': BatchResponse.dispatch,
'{0}/v1/describejobqueues': BatchResponse.dispatch,
'{0}/v1/updatejobqueue': BatchResponse.dispatch,
'{0}/v1/deletejobqueue': BatchResponse.dispatch,
'{0}/v1/registerjobdefinition': BatchResponse.dispatch,
'{0}/v1/deregisterjobdefinition': BatchResponse.dispatch,
'{0}/v1/describejobdefinitions': BatchResponse.dispatch,
'{0}/v1/submitjob': BatchResponse.dispatch,
'{0}/v1/describejobs': BatchResponse.dispatch,
'{0}/v1/listjobs': BatchResponse.dispatch,
'{0}/v1/terminatejob': BatchResponse.dispatch,
'{0}/v1/canceljob': BatchResponse.dispatch,
}
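Because every Batch route above points at the same `BatchResponse.dispatch` entry point, the action name is recovered from the URL path itself, mirroring `_get_action` in responses.py. A standalone sketch of that path parsing:

```python
from urllib.parse import urlsplit

def get_action(uri):
    # The Batch action is the path segment immediately after /v1/,
    # e.g. https://batch.us-east-1.amazonaws.com/v1/submitjob -> 'submitjob'
    return urlsplit(uri).path.lstrip('/').split('/')[1]
```

This is why each URL pattern in the table can share one dispatcher instead of mapping to a dedicated handler.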

moto/batch/utils.py Normal file

@ -0,0 +1,22 @@
from __future__ import unicode_literals
def make_arn_for_compute_env(account_id, name, region_name):
return "arn:aws:batch:{0}:{1}:compute-environment/{2}".format(region_name, account_id, name)
def make_arn_for_job_queue(account_id, name, region_name):
return "arn:aws:batch:{0}:{1}:job-queue/{2}".format(region_name, account_id, name)
def make_arn_for_task_def(account_id, name, revision, region_name):
return "arn:aws:batch:{0}:{1}:job-definition/{2}:{3}".format(region_name, account_id, name, revision)
def lowercase_first_key(some_dict):
new_dict = {}
for key, value in some_dict.items():
new_key = key[0].lower() + key[1:]
new_dict[new_key] = value
return new_dict
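`lowercase_first_key` converts CloudFormation-style PascalCase keys into the camelCase the Batch API uses. An equivalent one-line sketch with sample input (the key names are illustrative):

```python
def lowercase_first_key(some_dict):
    # Lower-case only the first character of each key, leaving the rest intact
    return {key[0].lower() + key[1:]: value for key, value in some_dict.items()}

converted = lowercase_first_key({'ComputeEnvironmentName': 'env1', 'Type': 'MANAGED'})
```

Values are passed through untouched; only top-level keys are rewritten.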


@ -8,12 +8,14 @@ import re
from moto.autoscaling import models as autoscaling_models
from moto.awslambda import models as lambda_models
from moto.batch import models as batch_models
from moto.cloudwatch import models as cloudwatch_models
from moto.datapipeline import models as datapipeline_models
from moto.dynamodb import models as dynamodb_models
from moto.ec2 import models as ec2_models
from moto.ecs import models as ecs_models
from moto.elb import models as elb_models
from moto.elbv2 import models as elbv2_models
from moto.iam import models as iam_models
from moto.kinesis import models as kinesis_models
from moto.kms import models as kms_models
@ -31,6 +33,9 @@ from boto.cloudformation.stack import Output
MODEL_MAP = {
"AWS::AutoScaling::AutoScalingGroup": autoscaling_models.FakeAutoScalingGroup,
"AWS::AutoScaling::LaunchConfiguration": autoscaling_models.FakeLaunchConfiguration,
"AWS::Batch::JobDefinition": batch_models.JobDefinition,
"AWS::Batch::JobQueue": batch_models.JobQueue,
"AWS::Batch::ComputeEnvironment": batch_models.ComputeEnvironment,
"AWS::DynamoDB::Table": dynamodb_models.Table,
"AWS::Kinesis::Stream": kinesis_models.Stream,
"AWS::Lambda::EventSourceMapping": lambda_models.EventSourceMapping,
@ -57,6 +62,9 @@ MODEL_MAP = {
"AWS::ECS::TaskDefinition": ecs_models.TaskDefinition,
"AWS::ECS::Service": ecs_models.Service,
"AWS::ElasticLoadBalancing::LoadBalancer": elb_models.FakeLoadBalancer,
"AWS::ElasticLoadBalancingV2::LoadBalancer": elbv2_models.FakeLoadBalancer,
"AWS::ElasticLoadBalancingV2::TargetGroup": elbv2_models.FakeTargetGroup,
"AWS::ElasticLoadBalancingV2::Listener": elbv2_models.FakeListener,
"AWS::DataPipeline::Pipeline": datapipeline_models.Pipeline,
"AWS::IAM::InstanceProfile": iam_models.InstanceProfile,
"AWS::IAM::Role": iam_models.Role,
@ -322,7 +330,7 @@ def parse_output(output_logical_id, output_json, resources_map):
output_json = clean_json(output_json, resources_map)
output = Output()
output.key = output_logical_id
output.value = output_json['Value']
output.value = clean_json(output_json['Value'], resources_map)
output.description = output_json.get('Description')
return output


@ -19,10 +19,19 @@ class CloudFormationResponse(BaseResponse):
template_url_parts = urlparse(template_url)
if "localhost" in template_url:
bucket_name, key_name = template_url_parts.path.lstrip(
"/").split("/")
"/").split("/", 1)
else:
bucket_name = template_url_parts.netloc.split(".")[0]
key_name = template_url_parts.path.lstrip("/")
if template_url_parts.netloc.endswith('amazonaws.com') \
and template_url_parts.netloc.startswith('s3'):
# Handle path-style S3 URLs, where the bucket is the first path segment.
# The region could also be extracted here, since S3 endpoints are regional:
# region = template_url.netloc.split('.')[1]
bucket_name, key_name = template_url_parts.path.lstrip(
"/").split("/", 1)
else:
bucket_name = template_url_parts.netloc.split(".")[0]
key_name = template_url_parts.path.lstrip("/")
key = s3_backend.get_key(bucket_name, key_name)
return key.value.decode("utf-8")
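The branch added above distinguishes two S3 URL shapes. A standalone sketch of the same parsing (the localhost branch is omitted, and `urllib.parse` stands in for the `six` import moto uses):

```python
from urllib.parse import urlparse

def bucket_and_key(template_url):
    parts = urlparse(template_url)
    if parts.netloc.startswith('s3') and parts.netloc.endswith('amazonaws.com'):
        # Path-style URL: the bucket is the first path segment; maxsplit=1
        # keeps keys that themselves contain slashes intact.
        return tuple(parts.path.lstrip('/').split('/', 1))
    # Virtual-hosted-style URL: the bucket is the leftmost hostname label.
    return parts.netloc.split('.')[0], parts.path.lstrip('/')

print(bucket_and_key('https://s3.amazonaws.com/mybucket/nested/template.json'))
print(bucket_and_key('https://mybucket.s3.amazonaws.com/nested/template.json'))
```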
@ -227,13 +236,13 @@ CREATE_STACK_RESPONSE_TEMPLATE = """<CreateStackResponse>
</CreateStackResponse>
"""
UPDATE_STACK_RESPONSE_TEMPLATE = """<UpdateStackResponse>
UPDATE_STACK_RESPONSE_TEMPLATE = """<UpdateStackResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/">
<UpdateStackResult>
<StackId>{{ stack.stack_id }}</StackId>
</UpdateStackResult>
<ResponseMetadata>
<RequestId>b9b5b068-3a41-11e5-94eb-example</RequestId>
</ResponseMetadata>
<RequestId>b9b4b068-3a41-11e5-94eb-example</RequestId>
</ResponseMetadata>
</UpdateStackResponse>
"""
@ -399,16 +408,6 @@ GET_TEMPLATE_RESPONSE_TEMPLATE = """<GetTemplateResponse>
</GetTemplateResponse>"""
UPDATE_STACK_RESPONSE_TEMPLATE = """<UpdateStackResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/">
<UpdateStackResult>
<StackId>{{ stack.stack_id }}</StackId>
</UpdateStackResult>
<ResponseMetadata>
<RequestId>b9b4b068-3a41-11e5-94eb-example</RequestId>
</ResponseMetadata>
</UpdateStackResponse>
"""
DELETE_STACK_RESPONSE_TEMPLATE = """<DeleteStackResponse>
<ResponseMetadata>
<RequestId>5ccc7dcd-744c-11e5-be70-example</RequestId>
@ -416,6 +415,7 @@ DELETE_STACK_RESPONSE_TEMPLATE = """<DeleteStackResponse>
</DeleteStackResponse>
"""
LIST_EXPORTS_RESPONSE = """<ListExportsResponse xmlns="http://cloudformation.amazonaws.com/doc/2010-05-15/">
<ListExportsResult>
<Exports>


@ -1,4 +1,7 @@
import json
from moto.core import BaseBackend, BaseModel
from moto.core.exceptions import RESTError
import boto.ec2.cloudwatch
import datetime
@ -35,9 +38,26 @@ class FakeAlarm(BaseModel):
self.ok_actions = ok_actions
self.insufficient_data_actions = insufficient_data_actions
self.unit = unit
self.state_updated_timestamp = datetime.datetime.utcnow()
self.configuration_updated_timestamp = datetime.datetime.utcnow()
self.history = []
self.state_reason = ''
self.state_reason_data = '{}'
self.state = 'OK'
self.state_updated_timestamp = datetime.datetime.utcnow()
def update_state(self, reason, reason_data, state_value):
# The history item type determines what the remaining fields mean; it is one of ConfigurationUpdate | StateUpdate | Action
self.history.append(
('StateUpdate', self.state_reason, self.state_reason_data, self.state, self.state_updated_timestamp)
)
self.state_reason = reason
self.state_reason_data = reason_data
self.state = state_value
self.state_updated_timestamp = datetime.datetime.utcnow()
class MetricDatum(BaseModel):
@ -122,10 +142,8 @@ class CloudWatchBackend(BaseBackend):
if alarm.name in alarm_names
]
def get_alarms_by_state_value(self, state):
raise NotImplementedError(
"DescribeAlarm by state is not implemented in moto."
)
def get_alarms_by_state_value(self, target_state):
return filter(lambda alarm: alarm.state == target_state, self.alarms.values())
def delete_alarms(self, alarm_names):
for alarm_name in alarm_names:
@ -164,6 +182,21 @@ class CloudWatchBackend(BaseBackend):
def get_dashboard(self, dashboard):
return self.dashboards.get(dashboard)
def set_alarm_state(self, alarm_name, reason, reason_data, state_value):
try:
if reason_data is not None:
json.loads(reason_data)
except ValueError:
raise RESTError('InvalidFormat', 'StateReasonData is invalid JSON')
if alarm_name not in self.alarms:
raise RESTError('ResourceNotFound', 'Alarm {0} not found'.format(alarm_name), status=404)
if state_value not in ('OK', 'ALARM', 'INSUFFICIENT_DATA'):
raise RESTError('InvalidParameterValue', 'StateValue is not one of OK | ALARM | INSUFFICIENT_DATA')
self.alarms[alarm_name].update_state(reason, reason_data, state_value)
class LogGroup(BaseModel):


@ -1,4 +1,5 @@
import json
from moto.core.utils import amzn_request_id
from moto.core.responses import BaseResponse
from .models import cloudwatch_backends
@ -13,6 +14,7 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(ERROR_RESPONSE_TEMPLATE)
return template.render(code=code, message=message), dict(status=status)
@amzn_request_id
def put_metric_alarm(self):
name = self._get_param('AlarmName')
namespace = self._get_param('Namespace')
@ -40,6 +42,7 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(PUT_METRIC_ALARM_TEMPLATE)
return template.render(alarm=alarm)
@amzn_request_id
def describe_alarms(self):
action_prefix = self._get_param('ActionPrefix')
alarm_name_prefix = self._get_param('AlarmNamePrefix')
@ -62,12 +65,14 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(DESCRIBE_ALARMS_TEMPLATE)
return template.render(alarms=alarms)
@amzn_request_id
def delete_alarms(self):
alarm_names = self._get_multi_param('AlarmNames.member')
self.cloudwatch_backend.delete_alarms(alarm_names)
template = self.response_template(DELETE_METRIC_ALARMS_TEMPLATE)
return template.render()
@amzn_request_id
def put_metric_data(self):
namespace = self._get_param('Namespace')
metric_data = []
@ -99,11 +104,13 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(PUT_METRIC_DATA_TEMPLATE)
return template.render()
@amzn_request_id
def list_metrics(self):
metrics = self.cloudwatch_backend.get_all_metrics()
template = self.response_template(LIST_METRICS_TEMPLATE)
return template.render(metrics=metrics)
@amzn_request_id
def delete_dashboards(self):
dashboards = self._get_multi_param('DashboardNames.member')
if dashboards is None:
@ -116,18 +123,23 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(DELETE_DASHBOARD_TEMPLATE)
return template.render()
@amzn_request_id
def describe_alarm_history(self):
raise NotImplementedError()
@amzn_request_id
def describe_alarms_for_metric(self):
raise NotImplementedError()
@amzn_request_id
def disable_alarm_actions(self):
raise NotImplementedError()
@amzn_request_id
def enable_alarm_actions(self):
raise NotImplementedError()
@amzn_request_id
def get_dashboard(self):
dashboard_name = self._get_param('DashboardName')
@ -138,9 +150,11 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(GET_DASHBOARD_TEMPLATE)
return template.render(dashboard=dashboard)
@amzn_request_id
def get_metric_statistics(self):
raise NotImplementedError()
@amzn_request_id
def list_dashboards(self):
prefix = self._get_param('DashboardNamePrefix', '')
@ -149,6 +163,7 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(LIST_DASHBOARD_RESPONSE)
return template.render(dashboards=dashboards)
@amzn_request_id
def put_dashboard(self):
name = self._get_param('DashboardName')
body = self._get_param('DashboardBody')
@ -163,14 +178,23 @@ class CloudWatchResponse(BaseResponse):
template = self.response_template(PUT_DASHBOARD_RESPONSE)
return template.render()
@amzn_request_id
def set_alarm_state(self):
raise NotImplementedError()
alarm_name = self._get_param('AlarmName')
reason = self._get_param('StateReason')
reason_data = self._get_param('StateReasonData')
state_value = self._get_param('StateValue')
self.cloudwatch_backend.set_alarm_state(alarm_name, reason, reason_data, state_value)
template = self.response_template(SET_ALARM_STATE_TEMPLATE)
return template.render()
PUT_METRIC_ALARM_TEMPLATE = """<PutMetricAlarmResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
<ResponseMetadata>
<RequestId>
2690d7eb-ed86-11dd-9877-6fad448a8419
{{ request_id }}
</RequestId>
</ResponseMetadata>
</PutMetricAlarmResponse>"""
@ -229,7 +253,7 @@ DESCRIBE_ALARMS_TEMPLATE = """<DescribeAlarmsResponse xmlns="http://monitoring.a
DELETE_METRIC_ALARMS_TEMPLATE = """<DeleteMetricAlarmResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
<ResponseMetadata>
<RequestId>
2690d7eb-ed86-11dd-9877-6fad448a8419
{{ request_id }}
</RequestId>
</ResponseMetadata>
</DeleteMetricAlarmResponse>"""
@ -237,7 +261,7 @@ DELETE_METRIC_ALARMS_TEMPLATE = """<DeleteMetricAlarmResponse xmlns="http://moni
PUT_METRIC_DATA_TEMPLATE = """<PutMetricDataResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
<ResponseMetadata>
<RequestId>
2690d7eb-ed86-11dd-9877-6fad448a8419
{{ request_id }}
</RequestId>
</ResponseMetadata>
</PutMetricDataResponse>"""
@ -271,7 +295,7 @@ PUT_DASHBOARD_RESPONSE = """<PutDashboardResponse xmlns="http://monitoring.amazo
<DashboardValidationMessages/>
</PutDashboardResult>
<ResponseMetadata>
<RequestId>44b1d4d8-9fa3-11e7-8ad3-41b86ac5e49e</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</PutDashboardResponse>"""
@ -289,14 +313,14 @@ LIST_DASHBOARD_RESPONSE = """<ListDashboardsResponse xmlns="http://monitoring.am
</DashboardEntries>
</ListDashboardsResult>
<ResponseMetadata>
<RequestId>c3773873-9fa5-11e7-b315-31fcc9275d62</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ListDashboardsResponse>"""
DELETE_DASHBOARD_TEMPLATE = """<DeleteDashboardsResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
<DeleteDashboardsResult/>
<ResponseMetadata>
<RequestId>68d1dc8c-9faa-11e7-a694-df2715690df2</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DeleteDashboardsResponse>"""
@ -307,16 +331,22 @@ GET_DASHBOARD_TEMPLATE = """<GetDashboardResponse xmlns="http://monitoring.amazo
<DashboardName>{{ dashboard.name }}</DashboardName>
</GetDashboardResult>
<ResponseMetadata>
<RequestId>e3c16bb0-9faa-11e7-b315-31fcc9275d62</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</GetDashboardResponse>
"""
SET_ALARM_STATE_TEMPLATE = """<SetAlarmStateResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetAlarmStateResponse>"""
ERROR_RESPONSE_TEMPLATE = """<ErrorResponse xmlns="http://monitoring.amazonaws.com/doc/2010-08-01/">
<Error>
<Type>Sender</Type>
<Code>{{ code }}</Code>
<Message>{{ message }}</Message>
</Error>
<RequestId>5e45fd1e-9fa3-11e7-b720-89e8821d38c4</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ErrorResponse>"""


@ -34,6 +34,8 @@ ERROR_JSON_RESPONSE = u"""{
class RESTError(HTTPException):
code = 400
templates = {
'single_error': SINGLE_ERROR_RESPONSE,
'error': ERROR_RESPONSE,
@ -54,7 +56,6 @@ class DryRunClientError(RESTError):
class JsonRESTError(RESTError):
def __init__(self, error_type, message, template='error_json', **kwargs):
super(JsonRESTError, self).__init__(
error_type, message, template, **kwargs)


@ -1,3 +1,4 @@
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from __future__ import absolute_import
@ -176,16 +177,49 @@ class ServerModeMockAWS(BaseMockAWS):
if 'endpoint_url' not in kwargs:
kwargs['endpoint_url'] = "http://localhost:5000"
return real_boto3_resource(*args, **kwargs)
def fake_httplib_send_output(self, message_body=None, *args, **kwargs):
def _convert_to_bytes(mixed_buffer):
bytes_buffer = []
for chunk in mixed_buffer:
if isinstance(chunk, six.text_type):
bytes_buffer.append(chunk.encode('utf-8'))
else:
bytes_buffer.append(chunk)
msg = b"\r\n".join(bytes_buffer)
return msg
self._buffer.extend((b"", b""))
msg = _convert_to_bytes(self._buffer)
del self._buffer[:]
if isinstance(message_body, bytes):
msg += message_body
message_body = None
self.send(msg)
# if self._expect_header_set:
# read, write, exc = select.select([self.sock], [], [self.sock], 1)
# if read:
# self._handle_expect_response(message_body)
# return
if message_body is not None:
self.send(message_body)
self._client_patcher = mock.patch('boto3.client', fake_boto3_client)
self._resource_patcher = mock.patch(
'boto3.resource', fake_boto3_resource)
self._resource_patcher = mock.patch('boto3.resource', fake_boto3_resource)
if six.PY2:
self._httplib_patcher = mock.patch('httplib.HTTPConnection._send_output', fake_httplib_send_output)
self._client_patcher.start()
self._resource_patcher.start()
if six.PY2:
self._httplib_patcher.start()
def disable_patching(self):
if self._client_patcher:
self._client_patcher.stop()
self._resource_patcher.stop()
if six.PY2:
self._httplib_patcher.stop()
class Model(type):


@ -17,6 +17,8 @@ from six.moves.urllib.parse import parse_qs, urlparse
import xmltodict
from pkg_resources import resource_filename
from werkzeug.exceptions import HTTPException
import boto3
from moto.compat import OrderedDict
from moto.core.utils import camelcase_to_underscores, method_names_from_class
@ -103,7 +105,8 @@ class _TemplateEnvironmentMixin(object):
class BaseResponse(_TemplateEnvironmentMixin):
default_region = 'us-east-1'
region_regex = r'\.(.+?)\.amazonaws\.com'
# to extract region, use [^.]
region_regex = r'\.(?P<region>[a-z]{2}-[a-z]+-\d{1})\.amazonaws\.com'
aws_service_spec = None
@classmethod
@ -151,12 +154,12 @@ class BaseResponse(_TemplateEnvironmentMixin):
querystring.update(headers)
querystring = _decode_dict(querystring)
self.uri = full_url
self.path = urlparse(full_url).path
self.querystring = querystring
self.method = request.method
self.region = self.get_region_from_url(request, full_url)
self.uri_match = None
self.headers = request.headers
if 'host' not in self.headers:
@ -178,6 +181,58 @@ class BaseResponse(_TemplateEnvironmentMixin):
self.setup_class(request, full_url, headers)
return self.call_action()
def uri_to_regexp(self, uri):
"""converts uri w/ placeholder to regexp
'/cars/{carName}/drivers/{DriverName}'
-> '^/cars/.*/drivers/[^/]*$'
'/cars/{carName}/drivers/{DriverName}/drive'
-> '^/cars/.*/drivers/.*/drive$'
"""
def _convert(elem, is_last):
if not re.match('^{.*}$', elem):
return elem
name = elem.replace('{', '').replace('}', '')
if is_last:
return '(?P<%s>[^/]*)' % name
return '(?P<%s>.*)' % name
elems = uri.split('/')
num_elems = len(elems)
regexp = '^{}$'.format('/'.join([_convert(elem, (i == num_elems - 1)) for i, elem in enumerate(elems)]))
return regexp
def _get_action_from_method_and_request_uri(self, method, request_uri):
"""basically used for `rest-json` APIs
You can refer to example from link below
https://github.com/boto/botocore/blob/develop/botocore/data/iot/2015-05-28/service-2.json
"""
# The service's response class must define a 'SERVICE_NAME' class member
# for the action to be resolved from the method and URL.
if not hasattr(self, 'SERVICE_NAME'):
return None
service = self.SERVICE_NAME
conn = boto3.client(service, region_name=self.region)
# make cache if it does not exist yet
if not hasattr(self, 'method_urls'):
self.method_urls = defaultdict(lambda: defaultdict(str))
op_names = conn._service_model.operation_names
for op_name in op_names:
op_model = conn._service_model.operation_model(op_name)
_method = op_model.http['method']
uri_regexp = self.uri_to_regexp(op_model.http['requestUri'])
self.method_urls[_method][uri_regexp] = op_model.name
regexp_and_names = self.method_urls[method]
for regexp, name in regexp_and_names.items():
match = re.match(regexp, request_uri)
self.uri_match = match
if match:
return name
return None
def _get_action(self):
action = self.querystring.get('Action', [""])[0]
if not action: # Some services use a header for the action
@ -186,7 +241,9 @@ class BaseResponse(_TemplateEnvironmentMixin):
'x-amz-target') or self.headers.get('X-Amz-Target')
if match:
action = match.split(".")[-1]
# get action from method and uri
if not action:
return self._get_action_from_method_and_request_uri(self.method, self.path)
return action
def call_action(self):
@ -199,10 +256,14 @@ class BaseResponse(_TemplateEnvironmentMixin):
response = method()
except HTTPException as http_error:
response = http_error.description, dict(status=http_error.code)
if isinstance(response, six.string_types):
return 200, headers, response
else:
body, new_headers = response
if len(response) == 2:
body, new_headers = response
else:
status, new_headers, body = response
status = new_headers.get('status', 200)
headers.update(new_headers)
# Cast status to string
@ -217,6 +278,22 @@ class BaseResponse(_TemplateEnvironmentMixin):
val = self.querystring.get(param_name)
if val is not None:
return val[0]
# try to get json body parameter
if self.body is not None:
try:
return json.loads(self.body)[param_name]
except ValueError:
pass
except KeyError:
pass
# try to get path parameter
if self.uri_match:
try:
return self.uri_match.group(param_name)
except IndexError:
# do nothing if param is not found
pass
return if_none
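Taken together, `_get_param` now resolves a parameter from three places in order: the querystring, the JSON request body, then a named group captured from the request path. A self-contained sketch of that lookup order (the real method reads these from `self`):

```python
import json
import re

def get_param(param_name, querystring, body, uri_match, if_none=None):
    # 1. Querystring parameters win (values are lists, as in werkzeug).
    val = querystring.get(param_name)
    if val is not None:
        return val[0]
    # 2. Then a JSON request body, if it parses and contains the key.
    if body is not None:
        try:
            return json.loads(body)[param_name]
        except (ValueError, KeyError):
            pass
    # 3. Finally a named group captured from the request path.
    if uri_match:
        try:
            return uri_match.group(param_name)
        except IndexError:
            # do nothing if the param is not a group in the matched URI
            pass
    return if_none

match = re.match('^/things/(?P<thingName>[^/]*)$', '/things/sensor-1')
print(get_param('thingName', {}, None, match))
print(get_param('Limit', {'Limit': ['5']}, '{"Limit": 10}', match))
```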
def _get_int_param(self, param_name, if_none=None):


@ -1,10 +1,16 @@
from __future__ import unicode_literals
from functools import wraps
import binascii
import datetime
import inspect
import random
import re
import six
import string
REQUEST_ID_LONG = string.digits + string.ascii_uppercase
def camelcase_to_underscores(argument):
@ -194,3 +200,87 @@ def unix_time(dt=None):
def unix_time_millis(dt=None):
return unix_time(dt) * 1000.0
def gen_amz_crc32(response, headerdict=None):
if not isinstance(response, bytes):
response = response.encode()
crc = str(binascii.crc32(response))
if headerdict is not None and isinstance(headerdict, dict):
headerdict.update({'x-amz-crc32': crc})
return crc
def gen_amzn_requestid_long(headerdict=None):
req_id = ''.join([random.choice(REQUEST_ID_LONG) for _ in range(0, 52)])
if headerdict is not None and isinstance(headerdict, dict):
headerdict.update({'x-amzn-requestid': req_id})
return req_id
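A quick check of the request-id shape; this mirrors the helper above verbatim:

```python
import random
import string

REQUEST_ID_LONG = string.digits + string.ascii_uppercase

def gen_amzn_requestid_long(headerdict=None):
    # 52 characters drawn from [0-9A-Z], matching the x-amzn-requestid shape.
    req_id = ''.join(random.choice(REQUEST_ID_LONG) for _ in range(52))
    if isinstance(headerdict, dict):
        headerdict['x-amzn-requestid'] = req_id
    return req_id

headers = {}
req_id = gen_amzn_requestid_long(headers)
print(len(req_id), req_id == headers['x-amzn-requestid'])
```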
def amz_crc32(f):
@wraps(f)
def _wrapper(*args, **kwargs):
response = f(*args, **kwargs)
headers = {}
status = 200
if isinstance(response, six.string_types):
body = response
else:
if len(response) == 2:
body, new_headers = response
status = new_headers.get('status', 200)
else:
status, new_headers, body = response
headers.update(new_headers)
# Cast status to string
if "status" in headers:
headers['status'] = str(headers['status'])
try:
# Doesn't work on Python 2 for some odd unicode strings
gen_amz_crc32(body, headers)
except Exception:
pass
return status, headers, body
return _wrapper
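The CRC header itself is just the decimal CRC-32 of the response body, which is easy to verify in isolation:

```python
import binascii

def gen_amz_crc32(response, headerdict=None):
    # Decimal CRC-32 of the body, which DynamoDB-style clients check
    # against the x-amz-crc32 header.
    if not isinstance(response, bytes):
        response = response.encode()
    crc = str(binascii.crc32(response))
    if isinstance(headerdict, dict):
        headerdict['x-amz-crc32'] = crc
    return crc

headers = {}
print(gen_amz_crc32('{"TableNames": []}', headers), headers)
```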
def amzn_request_id(f):
@wraps(f)
def _wrapper(*args, **kwargs):
response = f(*args, **kwargs)
headers = {}
status = 200
if isinstance(response, six.string_types):
body = response
else:
if len(response) == 2:
body, new_headers = response
status = new_headers.get('status', 200)
else:
status, new_headers, body = response
headers.update(new_headers)
request_id = gen_amzn_requestid_long(headers)
# Update request ID in XML
try:
body = body.replace('{{ requestid }}', request_id)
except Exception:  # Ignored if it can't work on bytes (which are str on Python 2)
pass
return status, headers, body
return _wrapper


@ -1,6 +1,7 @@
from __future__ import unicode_literals
from .models import dynamodb_backend2
from .models import dynamodb_backends as dynamodb_backends2
from ..core.models import base_decorator, deprecated_base_decorator
dynamodb_backends2 = {"global": dynamodb_backend2}
mock_dynamodb2 = dynamodb_backend2.decorator
mock_dynamodb2_deprecated = dynamodb_backend2.deprecated_decorator
dynamodb_backend2 = dynamodb_backends2['us-east-1']
mock_dynamodb2 = base_decorator(dynamodb_backends2)
mock_dynamodb2_deprecated = deprecated_base_decorator(dynamodb_backends2)


@ -43,16 +43,14 @@ def get_comparison_func(range_comparison):
return COMPARISON_FUNCS.get(range_comparison)
#
class RecursionStopIteration(StopIteration):
pass
def get_filter_expression(expr, names, values):
# Examples
# expr = 'Id > 5 AND attribute_exists(test) AND Id BETWEEN 5 AND 6 OR length < 6 AND contains(test, 1) AND 5 IN (4,5, 6) OR (Id < 5 AND 5 > Id)'
# expr = 'Id > 5 AND Subs < 7'
# Need to do some dodgyness for NOT i think.
if 'NOT' in expr:
raise NotImplementedError('NOT not supported yet')
if names is None:
names = {}
if values is None:
@ -61,16 +59,28 @@ def get_filter_expression(expr, names, values):
# Do substitutions
for key, value in names.items():
expr = expr.replace(key, value)
# Store correct types of values for use later
values_map = {}
for key, value in values.items():
if 'N' in value:
expr.replace(key, float(value['N']))
values_map[key] = float(value['N'])
elif 'BOOL' in value:
values_map[key] = value['BOOL']
elif 'S' in value:
values_map[key] = value['S']
elif 'NS' in value:
values_map[key] = tuple(value['NS'])
elif 'SS' in value:
values_map[key] = tuple(value['SS'])
elif 'L' in value:
values_map[key] = tuple(value['L'])
else:
expr = expr.replace(key, value['S'])
raise NotImplementedError()
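The `values_map` built above decodes DynamoDB's typed wire format into plain Python values before tokenisation. A standalone version of that decoding step, matching the branches above (the `L` and set cases keep the raw items as tuples):

```python
# Standalone version of the ExpressionAttributeValues decoding above.
def decode_values(values):
    values_map = {}
    for key, value in values.items():
        if 'N' in value:
            values_map[key] = float(value['N'])
        elif 'BOOL' in value:
            values_map[key] = value['BOOL']
        elif 'S' in value:
            values_map[key] = value['S']
        elif 'NS' in value:
            values_map[key] = tuple(value['NS'])
        elif 'SS' in value:
            values_map[key] = tuple(value['SS'])
        elif 'L' in value:
            values_map[key] = tuple(value['L'])
        else:
            raise NotImplementedError(key)
    return values_map

print(decode_values({':id': {'N': '5'}, ':name': {'S': 'abc'}}))
```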
# Strip leading/trailing whitespace; interior spaces are skipped during tokenisation.
# The number of known options is really small so we can do a fair bit of cheating
#expr = list(re.sub('\s', '', expr)) # 'Id>5ANDattribute_exists(test)ORNOTlength<6'
expr = list(expr)
expr = list(expr.strip())
# DodgyTokenisation stage 1
def is_value(val):
@ -122,39 +132,42 @@ def get_filter_expression(expr, names, values):
return val in ('<', '>', '=', '>=', '<=', '<>', 'BETWEEN', 'IN', 'AND', 'OR', 'NOT')
# DodgyTokenisation stage 2, it groups together some elements to make RPN'ing it later easier.
tokens2 = []
token_iterator = iter(tokens)
for token in token_iterator:
if token == '(':
tuple_list = []
def handle_token(token, tokens2, token_iterator):
# Groups some tokens together to make later parsing easier; on an opening
# bracket it recurses, and unwinds again when RecursionStopIteration is raised.
if token == ')':
raise RecursionStopIteration() # Should be recursive so this should work
elif token == '(':
temp_list = []
next_token = six.next(token_iterator)
while next_token != ')':
try:
next_token = int(next_token)
except ValueError:
try:
next_token = float(next_token)
except ValueError:
pass
tuple_list.append(next_token)
next_token = six.next(token_iterator)
try:
while True:
next_token = six.next(token_iterator)
handle_token(next_token, temp_list, token_iterator)
except RecursionStopIteration:
pass # Continue
except StopIteration:
raise ValueError('Malformed filter expression, type1')
# Sigh, we only want to group a tuple if it doesn't contain operators
if any([is_op(item) for item in tuple_list]):
if any([is_op(item) for item in temp_list]):
# Its an expression
tokens2.append('(')
tokens2.extend(tuple_list)
tokens2.extend(temp_list)
tokens2.append(')')
else:
tokens2.append(tuple(tuple_list))
tokens2.append(tuple(temp_list))
elif token == 'BETWEEN':
field = tokens2.pop()
op1 = int(six.next(token_iterator))
# if values map contains a number, it would be a float
# so we need to int() it anyway
op1 = six.next(token_iterator)
op1 = int(values_map.get(op1, op1))
and_op = six.next(token_iterator)
assert and_op == 'AND'
op2 = int(six.next(token_iterator))
op2 = six.next(token_iterator)
op2 = int(values_map.get(op2, op2))
tokens2.append(['between', field, op1, op2])
elif is_function(token):
function_list = [token]
@ -167,16 +180,21 @@ def get_filter_expression(expr, names, values):
next_token = six.next(token_iterator)
tokens2.append(function_list)
else:
try:
token = int(token)
except ValueError:
try:
token = float(token)
except ValueError:
pass
tokens2.append(token)
# Convert tokens back to real types
if token in values_map:
token = values_map[token]
# Need to join >= <= <>
if len(tokens2) > 0 and ((tokens2[-1] == '>' and token == '=') or (tokens2[-1] == '<' and token == '=') or (tokens2[-1] == '<' and token == '>')):
tokens2.append(tokens2.pop() + token)
else:
tokens2.append(token)
tokens2 = []
token_iterator = iter(tokens)
for token in token_iterator:
handle_token(token, tokens2, token_iterator)
# Start of the Shunting-Yard algorithm. <-- Proper beast algorithm!
def is_number(val):
@ -205,7 +223,9 @@ def get_filter_expression(expr, names, values):
output.append(token)
else:
# Must be operator kw
while len(op_stack) > 0 and OPS[op_stack[-1]] <= OPS[token]:
# Cheat: NOT is our only right-associative operator; a proper fix would keep a dict of operator associativities
while len(op_stack) > 0 and OPS[op_stack[-1]] <= OPS[token] and op_stack[-1] != 'NOT':
output.append(op_stack.pop())
op_stack.append(token)
while len(op_stack) > 0:
@ -229,17 +249,22 @@ def get_filter_expression(expr, names, values):
stack = []
for token in output:
if is_op(token):
op2 = stack.pop()
op1 = stack.pop()
op_cls = OP_CLASS[token]
if token == 'NOT':
op1 = stack.pop()
op2 = True
else:
op2 = stack.pop()
op1 = stack.pop()
stack.append(op_cls(op1, op2))
else:
stack.append(to_func(token))
result = stack.pop(0)
if len(stack) > 0:
raise ValueError('Malformed filter expression')
raise ValueError('Malformed filter expression, type2')
return result
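The final pass above pops operands off a stack and wraps them in `Op` nodes; with plain booleans standing in for the node classes, the same stack discipline looks like this (note how unary NOT pops a single operand, as the special case above does):

```python
def eval_rpn(output):
    # Evaluate an RPN token stream of booleans and AND/OR/NOT operators,
    # mirroring the stack discipline of the parse pass above.
    stack = []
    for token in output:
        if token == 'NOT':
            stack.append(not stack.pop())
        elif token in ('AND', 'OR'):
            op2 = stack.pop()
            op1 = stack.pop()
            stack.append((op1 and op2) if token == 'AND' else (op1 or op2))
        else:
            stack.append(token)
    result = stack.pop()
    if stack:
        raise ValueError('Malformed filter expression')
    return result

print(eval_rpn([True, False, 'AND', 'NOT']))  # NOT (True AND False)
```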
@ -300,6 +325,18 @@ class Func(object):
return 'Func({0}...)'.format(self.FUNC)
class OpNot(Op):
OP = 'NOT'
def expr(self, item):
lhs = self._lhs(item)
return not lhs
def __str__(self):
return '({0} {1})'.format(self.OP, self.lhs)
class OpAnd(Op):
OP = 'AND'
@ -470,6 +507,7 @@ class FuncBetween(Func):
OP_CLASS = {
'NOT': OpNot,
'AND': OpAnd,
'OR': OpOr,
'IN': OpIn,


@ -1,13 +1,16 @@
from __future__ import unicode_literals
from collections import defaultdict
import copy
import datetime
import decimal
import json
import re
import boto3
from moto.compat import OrderedDict
from moto.core import BaseBackend, BaseModel
from moto.core.utils import unix_time
from moto.core.exceptions import JsonRESTError
from .comparisons import get_comparison_func, get_filter_expression, Op
@ -146,9 +149,38 @@ class Item(BaseModel):
key = key.strip()
value = value.strip()
if value in expression_attribute_values:
self.attrs[key] = DynamoType(expression_attribute_values[value])
value = DynamoType(expression_attribute_values[value])
else:
self.attrs[key] = DynamoType({"S": value})
value = DynamoType({"S": value})
if '.' not in key:
self.attrs[key] = value
else:
# Handle nested dict updates
key_parts = key.split('.')
attr = key_parts.pop(0)
if attr not in self.attrs:
raise ValueError()
last_val = self.attrs[attr].value
for key_part in key_parts:
# Hack but it'll do, traverses into a dict
if list(last_val.keys())[0] == 'M':
last_val = last_val['M']
if key_part not in last_val:
raise ValueError()
last_val = last_val[key_part]
# We have a reference to the nested object, but we can't simply assign to it
current_type = list(last_val.keys())[0]
if current_type == value.type:
last_val[current_type] = value.value
else:
last_val[value.type] = value.value
del last_val[current_type]
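The traversal above walks dotted keys through DynamoDB's map encoding, where each nested map is wrapped as `{'M': {...}}`. A standalone sketch with plain dicts in place of `DynamoType` (the `set_nested` name and sample item are illustrative, not moto's API):

```python
# Each hop through a dotted key unwraps one {'M': ...} map wrapper,
# then the leaf's type tag is swapped for the new value's type.
def set_nested(attrs, dotted_key, new_type, new_value):
    key_parts = dotted_key.split('.')
    attr = key_parts.pop(0)
    if attr not in attrs:
        raise ValueError(dotted_key)
    last_val = attrs[attr]
    for key_part in key_parts:
        if list(last_val.keys())[0] == 'M':
            last_val = last_val['M']
        if key_part not in last_val:
            raise ValueError(dotted_key)
        last_val = last_val[key_part]
    current_type = list(last_val.keys())[0]
    del last_val[current_type]
    last_val[new_type] = new_value

item = {'Meta': {'M': {'Owner': {'S': 'alice'}}}}
set_nested(item, 'Meta.Owner', 'S', 'bob')
print(item)
```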
elif action == 'ADD':
key, value = value.split(" ", 1)
key = key.strip()
@ -271,6 +303,10 @@ class Table(BaseModel):
self.items = defaultdict(dict)
self.table_arn = self._generate_arn(table_name)
self.tags = []
self.ttl = {
'TimeToLiveStatus': 'DISABLED' # One of 'ENABLING'|'DISABLING'|'ENABLED'|'DISABLED',
# 'AttributeName': 'string' # Can contain this
}
def _generate_arn(self, name):
return 'arn:aws:dynamodb:us-east-1:123456789011:table/' + name
@ -413,7 +449,7 @@ class Table(BaseModel):
def query(self, hash_key, range_comparison, range_objs, limit,
exclusive_start_key, scan_index_forward, projection_expression,
index_name=None, **filter_kwargs):
index_name=None, filter_expression=None, **filter_kwargs):
results = []
if index_name:
all_indexes = (self.global_indexes or []) + (self.indexes or [])
@ -486,7 +522,8 @@ class Table(BaseModel):
if projection_expression:
expressions = [x.strip() for x in projection_expression.split(',')]
for result in possible_results:
results = copy.deepcopy(results)
for result in results:
for attr in list(result.attrs):
if attr not in expressions:
result.attrs.pop(attr)
@ -496,6 +533,9 @@ class Table(BaseModel):
scanned_count = len(list(self.all_items()))
if filter_expression is not None:
results = [item for item in results if filter_expression.expr(item)]
results, last_evaluated_key = self._trim_results(results, limit,
exclusive_start_key)
return results, scanned_count, last_evaluated_key
@ -577,9 +617,16 @@ class Table(BaseModel):
class DynamoDBBackend(BaseBackend):
def __init__(self):
def __init__(self, region_name=None):
self.region_name = region_name
self.tables = OrderedDict()
def reset(self):
region_name = self.region_name
self.__dict__ = {}
self.__init__(region_name)
def create_table(self, name, **params):
if name in self.tables:
return None
@ -595,6 +642,11 @@ class DynamoDBBackend(BaseBackend):
if self.tables[table].table_arn == table_arn:
self.tables[table].tags.extend(tags)
def untag_resource(self, table_arn, tag_keys):
for table in self.tables:
if self.tables[table].table_arn == table_arn:
self.tables[table].tags = [tag for tag in self.tables[table].tags if tag['Key'] not in tag_keys]
def list_tags_of_resource(self, table_arn):
required_table = None
for table in self.tables:
@ -689,7 +741,9 @@ class DynamoDBBackend(BaseBackend):
return table.get_item(hash_key, range_key)
def query(self, table_name, hash_key_dict, range_comparison, range_value_dicts,
limit, exclusive_start_key, scan_index_forward, projection_expression, index_name=None, **filter_kwargs):
limit, exclusive_start_key, scan_index_forward, projection_expression, index_name=None,
expr_names=None, expr_values=None, filter_expression=None,
**filter_kwargs):
table = self.tables.get(table_name)
if not table:
return None, None
@ -698,8 +752,13 @@ class DynamoDBBackend(BaseBackend):
range_values = [DynamoType(range_value)
for range_value in range_value_dicts]
if filter_expression is not None:
filter_expression = get_filter_expression(filter_expression, expr_names, expr_values)
else:
filter_expression = Op(None, None) # Will always eval to true
return table.query(hash_key, range_comparison, range_values, limit,
exclusive_start_key, scan_index_forward, projection_expression, index_name, **filter_kwargs)
exclusive_start_key, scan_index_forward, projection_expression, index_name, filter_expression, **filter_kwargs)
def scan(self, table_name, filters, limit, exclusive_start_key, filter_expression, expr_names, expr_values):
table = self.tables.get(table_name)
@ -796,5 +855,28 @@ class DynamoDBBackend(BaseBackend):
hash_key, range_key = self.get_keys_value(table, keys)
return table.delete_item(hash_key, range_key)
def update_ttl(self, table_name, ttl_spec):
table = self.tables.get(table_name)
if table is None:
raise JsonRESTError('ResourceNotFound', 'Table not found')
dynamodb_backend2 = DynamoDBBackend()
if 'Enabled' not in ttl_spec or 'AttributeName' not in ttl_spec:
raise JsonRESTError('InvalidParameterValue',
'TimeToLiveSpecification does not contain Enabled and AttributeName')
if ttl_spec['Enabled']:
table.ttl['TimeToLiveStatus'] = 'ENABLED'
else:
table.ttl['TimeToLiveStatus'] = 'DISABLED'
table.ttl['AttributeName'] = ttl_spec['AttributeName']
def describe_ttl(self, table_name):
table = self.tables.get(table_name)
if table is None:
raise JsonRESTError('ResourceNotFound', 'Table not found')
return table.ttl
available_regions = boto3.session.Session().get_available_regions("dynamodb")
dynamodb_backends = {region: DynamoDBBackend(region_name=region) for region in available_regions}
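The TTL handling above validates the TimeToLiveSpecification and flips the table's status. A standalone sketch of that validation and state transition, with a plain dict in place of a Table and `ValueError` in place of `JsonRESTError`:

```python
def update_ttl_sketch(table_ttl, ttl_spec):
    # Mirrors the validation above: both keys must be present.
    if 'Enabled' not in ttl_spec or 'AttributeName' not in ttl_spec:
        raise ValueError(
            'TimeToLiveSpecification does not contain Enabled and AttributeName')
    table_ttl['TimeToLiveStatus'] = 'ENABLED' if ttl_spec['Enabled'] else 'DISABLED'
    table_ttl['AttributeName'] = ttl_spec['AttributeName']
    return table_ttl
```

`describe_ttl` then simply returns this dict as the TimeToLiveDescription.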


@ -4,8 +4,8 @@ import six
import re
from moto.core.responses import BaseResponse
from moto.core.utils import camelcase_to_underscores
from .models import dynamodb_backend2, dynamo_json_dump
from moto.core.utils import camelcase_to_underscores, amzn_request_id
from .models import dynamodb_backends, dynamo_json_dump
class DynamoHandler(BaseResponse):
@ -24,6 +24,15 @@ class DynamoHandler(BaseResponse):
def error(self, type_, message, status=400):
return status, self.response_headers, dynamo_json_dump({'__type': type_, 'message': message})
@property
def dynamodb_backend(self):
"""
:return: DynamoDB2 Backend
:rtype: moto.dynamodb2.models.DynamoDBBackend
"""
return dynamodb_backends[self.region]
@amzn_request_id
def call_action(self):
self.body = json.loads(self.body or '{}')
endpoint = self.get_endpoint_name(self.headers)
@ -45,10 +54,10 @@ class DynamoHandler(BaseResponse):
limit = body.get('Limit', 100)
if body.get("ExclusiveStartTableName"):
last = body.get("ExclusiveStartTableName")
start = list(dynamodb_backend2.tables.keys()).index(last) + 1
start = list(self.dynamodb_backend.tables.keys()).index(last) + 1
else:
start = 0
all_tables = list(dynamodb_backend2.tables.keys())
all_tables = list(self.dynamodb_backend.tables.keys())
if limit:
tables = all_tables[start:start + limit]
else:
@ -56,6 +65,7 @@ class DynamoHandler(BaseResponse):
response = {"TableNames": tables}
if limit and len(all_tables) > start + limit:
response["LastEvaluatedTableName"] = tables[-1]
return dynamo_json_dump(response)
def create_table(self):
@ -72,12 +82,12 @@ class DynamoHandler(BaseResponse):
global_indexes = body.get("GlobalSecondaryIndexes", [])
local_secondary_indexes = body.get("LocalSecondaryIndexes", [])
table = dynamodb_backend2.create_table(table_name,
schema=key_schema,
throughput=throughput,
attr=attr,
global_indexes=global_indexes,
indexes=local_secondary_indexes)
table = self.dynamodb_backend.create_table(table_name,
schema=key_schema,
throughput=throughput,
attr=attr,
global_indexes=global_indexes,
indexes=local_secondary_indexes)
if table is not None:
return dynamo_json_dump(table.describe())
else:
@ -86,7 +96,7 @@ class DynamoHandler(BaseResponse):
def delete_table(self):
name = self.body['TableName']
table = dynamodb_backend2.delete_table(name)
table = self.dynamodb_backend.delete_table(name)
if table is not None:
return dynamo_json_dump(table.describe())
else:
@ -94,15 +104,21 @@ class DynamoHandler(BaseResponse):
return self.error(er, 'Requested resource not found')
def tag_resource(self):
tags = self.body['Tags']
table_arn = self.body['ResourceArn']
dynamodb_backend2.tag_resource(table_arn, tags)
return json.dumps({})
tags = self.body['Tags']
self.dynamodb_backend.tag_resource(table_arn, tags)
return ''
def untag_resource(self):
table_arn = self.body['ResourceArn']
tags = self.body['TagKeys']
self.dynamodb_backend.untag_resource(table_arn, tags)
return ''
def list_tags_of_resource(self):
try:
table_arn = self.body['ResourceArn']
all_tags = dynamodb_backend2.list_tags_of_resource(table_arn)
all_tags = self.dynamodb_backend.list_tags_of_resource(table_arn)
all_tag_keys = [tag['Key'] for tag in all_tags]
marker = self.body.get('NextToken')
if marker:
@ -125,17 +141,17 @@ class DynamoHandler(BaseResponse):
def update_table(self):
name = self.body['TableName']
if 'GlobalSecondaryIndexUpdates' in self.body:
table = dynamodb_backend2.update_table_global_indexes(
table = self.dynamodb_backend.update_table_global_indexes(
name, self.body['GlobalSecondaryIndexUpdates'])
if 'ProvisionedThroughput' in self.body:
throughput = self.body["ProvisionedThroughput"]
table = dynamodb_backend2.update_table_throughput(name, throughput)
table = self.dynamodb_backend.update_table_throughput(name, throughput)
return dynamo_json_dump(table.describe())
def describe_table(self):
name = self.body['TableName']
try:
table = dynamodb_backend2.tables[name]
table = self.dynamodb_backend.tables[name]
except KeyError:
er = 'com.amazonaws.dynamodb.v20111205#ResourceNotFoundException'
return self.error(er, 'Requested resource not found')
@ -186,8 +202,7 @@ class DynamoHandler(BaseResponse):
expected[not_exists_m.group(1)] = {'Exists': False}
try:
result = dynamodb_backend2.put_item(
name, item, expected, overwrite)
result = self.dynamodb_backend.put_item(name, item, expected, overwrite)
except ValueError:
er = 'com.amazonaws.dynamodb.v20111205#ConditionalCheckFailedException'
return self.error(er, 'A condition specified in the operation could not be evaluated.')
@ -212,10 +227,10 @@ class DynamoHandler(BaseResponse):
request = list(table_request.values())[0]
if request_type == 'PutRequest':
item = request['Item']
dynamodb_backend2.put_item(table_name, item)
self.dynamodb_backend.put_item(table_name, item)
elif request_type == 'DeleteRequest':
keys = request['Key']
item = dynamodb_backend2.delete_item(table_name, keys)
item = self.dynamodb_backend.delete_item(table_name, keys)
response = {
"ConsumedCapacity": [
@ -235,7 +250,7 @@ class DynamoHandler(BaseResponse):
name = self.body['TableName']
key = self.body['Key']
try:
item = dynamodb_backend2.get_item(name, key)
item = self.dynamodb_backend.get_item(name, key)
except ValueError:
er = 'com.amazon.coral.validate#ValidationException'
return self.error(er, 'Validation Exception')
@ -266,7 +281,7 @@ class DynamoHandler(BaseResponse):
attributes_to_get = table_request.get('AttributesToGet')
results["Responses"][table_name] = []
for key in keys:
item = dynamodb_backend2.get_item(table_name, key)
item = self.dynamodb_backend.get_item(table_name, key)
if item:
item_describe = item.describe_attrs(attributes_to_get)
results["Responses"][table_name].append(
@ -283,7 +298,9 @@ class DynamoHandler(BaseResponse):
# {u'KeyConditionExpression': u'#n0 = :v0', u'ExpressionAttributeValues': {u':v0': {u'S': u'johndoe'}}, u'ExpressionAttributeNames': {u'#n0': u'username'}}
key_condition_expression = self.body.get('KeyConditionExpression')
projection_expression = self.body.get('ProjectionExpression')
expression_attribute_names = self.body.get('ExpressionAttributeNames')
expression_attribute_names = self.body.get('ExpressionAttributeNames', {})
filter_expression = self.body.get('FilterExpression')
expression_attribute_values = self.body.get('ExpressionAttributeValues', {})
if projection_expression and expression_attribute_names:
expressions = [x.strip() for x in projection_expression.split(',')]
@ -292,10 +309,11 @@ class DynamoHandler(BaseResponse):
projection_expression = projection_expression.replace(expression, expression_attribute_names[expression])
filter_kwargs = {}
if key_condition_expression:
value_alias_map = self.body['ExpressionAttributeValues']
table = dynamodb_backend2.get_table(name)
if key_condition_expression:
value_alias_map = self.body.get('ExpressionAttributeValues', {})
table = self.dynamodb_backend.get_table(name)
# If table does not exist
if table is None:
@ -318,7 +336,7 @@ class DynamoHandler(BaseResponse):
index = table.schema
reverse_attribute_lookup = dict((v, k) for k, v in
six.iteritems(self.body['ExpressionAttributeNames']))
six.iteritems(self.body.get('ExpressionAttributeNames', {})))
if " AND " in key_condition_expression:
expressions = key_condition_expression.split(" AND ", 1)
@ -357,13 +375,14 @@ class DynamoHandler(BaseResponse):
range_values = []
hash_key_value_alias = hash_key_expression.split("=")[1].strip()
hash_key = value_alias_map[hash_key_value_alias]
# Temporary fix until we get proper KeyConditionExpression function
hash_key = value_alias_map.get(hash_key_value_alias, {'S': hash_key_value_alias})
else:
# 'KeyConditions': {u'forum_name': {u'ComparisonOperator': u'EQ', u'AttributeValueList': [{u'S': u'the-key'}]}}
key_conditions = self.body.get('KeyConditions')
query_filters = self.body.get("QueryFilter")
if key_conditions:
hash_key_name, range_key_name = dynamodb_backend2.get_table_keys_name(
hash_key_name, range_key_name = self.dynamodb_backend.get_table_keys_name(
name, key_conditions.keys())
for key, value in key_conditions.items():
if key not in (hash_key_name, range_key_name):
@ -396,9 +415,12 @@ class DynamoHandler(BaseResponse):
exclusive_start_key = self.body.get('ExclusiveStartKey')
limit = self.body.get("Limit")
scan_index_forward = self.body.get("ScanIndexForward")
items, scanned_count, last_evaluated_key = dynamodb_backend2.query(
items, scanned_count, last_evaluated_key = self.dynamodb_backend.query(
name, hash_key, range_comparison, range_values, limit,
exclusive_start_key, scan_index_forward, projection_expression, index_name=index_name, **filter_kwargs)
exclusive_start_key, scan_index_forward, projection_expression, index_name=index_name,
expr_names=expression_attribute_names, expr_values=expression_attribute_values,
filter_expression=filter_expression, **filter_kwargs
)
if items is None:
er = 'com.amazonaws.dynamodb.v20111205#ResourceNotFoundException'
return self.error(er, 'Requested resource not found')
@ -440,12 +462,12 @@ class DynamoHandler(BaseResponse):
limit = self.body.get("Limit")
try:
items, scanned_count, last_evaluated_key = dynamodb_backend2.scan(name, filters,
limit,
exclusive_start_key,
filter_expression,
expression_attribute_names,
expression_attribute_values)
items, scanned_count, last_evaluated_key = self.dynamodb_backend.scan(name, filters,
limit,
exclusive_start_key,
filter_expression,
expression_attribute_names,
expression_attribute_values)
except ValueError as err:
er = 'com.amazonaws.dynamodb.v20111205#ValidationError'
return self.error(er, 'Bad Filter Expression: {0}'.format(err))
@ -476,12 +498,12 @@ class DynamoHandler(BaseResponse):
name = self.body['TableName']
keys = self.body['Key']
return_values = self.body.get('ReturnValues', '')
table = dynamodb_backend2.get_table(name)
table = self.dynamodb_backend.get_table(name)
if not table:
er = 'com.amazonaws.dynamodb.v20120810#ConditionalCheckFailedException'
return self.error(er, 'A condition specified in the operation could not be evaluated.')
item = dynamodb_backend2.delete_item(name, keys)
item = self.dynamodb_backend.delete_item(name, keys)
if item and return_values == 'ALL_OLD':
item_dict = item.to_json()
else:
@ -498,7 +520,7 @@ class DynamoHandler(BaseResponse):
'ExpressionAttributeNames', {})
expression_attribute_values = self.body.get(
'ExpressionAttributeValues', {})
existing_item = dynamodb_backend2.get_item(name, key)
existing_item = self.dynamodb_backend.get_item(name, key)
if 'Expected' in self.body:
expected = self.body['Expected']
@ -534,9 +556,10 @@ class DynamoHandler(BaseResponse):
'\s*([=\+-])\s*', '\\1', update_expression)
try:
item = dynamodb_backend2.update_item(
name, key, update_expression, attribute_updates, expression_attribute_names, expression_attribute_values,
expected)
item = self.dynamodb_backend.update_item(
name, key, update_expression, attribute_updates, expression_attribute_names,
expression_attribute_values, expected
)
except ValueError:
er = 'com.amazonaws.dynamodb.v20111205#ConditionalCheckFailedException'
return self.error(er, 'A condition specified in the operation could not be evaluated.')
@ -553,3 +576,26 @@ class DynamoHandler(BaseResponse):
item_dict['Attributes'] = {}
return dynamo_json_dump(item_dict)
def describe_limits(self):
return json.dumps({
'AccountMaxReadCapacityUnits': 20000,
'TableMaxWriteCapacityUnits': 10000,
'AccountMaxWriteCapacityUnits': 20000,
'TableMaxReadCapacityUnits': 10000
})
def update_time_to_live(self):
name = self.body['TableName']
ttl_spec = self.body['TimeToLiveSpecification']
self.dynamodb_backend.update_ttl(name, ttl_spec)
return json.dumps({'TimeToLiveSpecification': ttl_spec})
def describe_time_to_live(self):
name = self.body['TableName']
ttl_spec = self.dynamodb_backend.describe_ttl(name)
return json.dumps({'TimeToLiveDescription': ttl_spec})


@ -2,10 +2,12 @@ from __future__ import unicode_literals
import copy
import itertools
import ipaddress
import json
import os
import re
import six
import warnings
from pkg_resources import resource_filename
import boto.ec2
@ -44,7 +46,6 @@ from .exceptions import (
InvalidRouteTableIdError,
InvalidRouteError,
InvalidInstanceIdError,
MalformedAMIIdError,
InvalidAMIIdError,
InvalidAMIAttributeItemValueError,
InvalidSnapshotIdError,
@ -112,8 +113,12 @@ from .utils import (
tag_filter_matches,
)
RESOURCES_DIR = os.path.join(os.path.dirname(__file__), 'resources')
INSTANCE_TYPES = json.load(open(os.path.join(RESOURCES_DIR, 'instance_types.json'), 'r'))
INSTANCE_TYPES = json.load(
open(resource_filename(__name__, 'resources/instance_types.json'), 'r')
)
AMIS = json.load(
open(resource_filename(__name__, 'resources/amis.json'), 'r')
)
def utc_date_and_time():
@ -372,6 +377,7 @@ class Instance(TaggedEC2Resource, BotoInstance):
self.subnet_id = kwargs.get("subnet_id")
in_ec2_classic = not bool(self.subnet_id)
self.key_name = kwargs.get("key_name")
self.ebs_optimized = kwargs.get("ebs_optimized", False)
self.source_dest_check = "true"
self.launch_time = utc_date_and_time()
self.disable_api_termination = kwargs.get("disable_api_termination", False)
@ -383,6 +389,11 @@ class Instance(TaggedEC2Resource, BotoInstance):
amis = self.ec2_backend.describe_images(filters={'image-id': image_id})
ami = amis[0] if amis else None
if ami is None:
warnings.warn('Could not find AMI with image-id:{0}, '
'in the near future this will '
'cause an error'.format(image_id),
PendingDeprecationWarning)
self.platform = ami.platform if ami else None
self.virtualization_type = ami.virtualization_type if ami else 'paravirtual'
@ -402,6 +413,10 @@ class Instance(TaggedEC2Resource, BotoInstance):
subnet = ec2_backend.get_subnet(self.subnet_id)
self.vpc_id = subnet.vpc_id
self._placement.zone = subnet.availability_zone
if associate_public_ip is None:
# Mapping a public IP hasn't been explicitly enabled or disabled
associate_public_ip = subnet.map_public_ip_on_launch == 'true'
elif placement:
self._placement.zone = placement
else:
@ -409,10 +424,22 @@ class Instance(TaggedEC2Resource, BotoInstance):
self.block_device_mapping = BlockDeviceMapping()
self.prep_nics(kwargs.get("nics", {}),
subnet_id=self.subnet_id,
private_ip=kwargs.get("private_ip"),
associate_public_ip=associate_public_ip)
self._private_ips = set()
self.prep_nics(
kwargs.get("nics", {}),
private_ip=kwargs.get("private_ip"),
associate_public_ip=associate_public_ip
)
def __del__(self):
try:
subnet = self.ec2_backend.get_subnet(self.subnet_id)
for ip in self._private_ips:
subnet.del_subnet_ip(ip)
except Exception:
# It's not "super" critical that we clean this up, as reset will do it;
# worst case we'll get IP address exhaustion... rarely
pass
def setup_defaults(self):
# Default have an instance with root volume should you not wish to
@ -547,14 +574,23 @@ class Instance(TaggedEC2Resource, BotoInstance):
else:
return self.security_groups
def prep_nics(self, nic_spec, subnet_id=None, private_ip=None, associate_public_ip=None):
def prep_nics(self, nic_spec, private_ip=None, associate_public_ip=None):
self.nics = {}
if not private_ip:
if self.subnet_id:
subnet = self.ec2_backend.get_subnet(self.subnet_id)
if not private_ip:
private_ip = subnet.get_available_subnet_ip(instance=self)
else:
subnet.request_ip(private_ip, instance=self)
self._private_ips.add(private_ip)
elif private_ip is None:
# Preserve old behaviour if in EC2-Classic mode
private_ip = random_private_ip()
# Primary NIC defaults
primary_nic = {'SubnetId': subnet_id,
primary_nic = {'SubnetId': self.subnet_id,
'PrivateIpAddress': private_ip,
'AssociatePublicIpAddress': associate_public_ip}
primary_nic = dict((k, v) for k, v in primary_nic.items() if v)
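The `dict((k, v) ... if v)` idiom drops any NIC field that is unset, so an EC2-Classic instance (no SubnetId) produces a spec without that key. Sketch with hypothetical values:

```python
primary_nic = {'SubnetId': None,
               'PrivateIpAddress': '10.0.0.5',
               'AssociatePublicIpAddress': False}
# Falsy values (None, False, '') are dropped, as in prep_nics above.
primary_nic = dict((k, v) for k, v in primary_nic.items() if v)
# primary_nic == {'PrivateIpAddress': '10.0.0.5'}
```

Note that `AssociatePublicIpAddress=False` is dropped too, since the filter keeps only truthy values.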
@ -765,14 +801,12 @@ class InstanceBackend(object):
associated with the given instance_ids.
"""
reservations = []
for reservation in self.all_reservations(make_copy=True):
for reservation in self.all_reservations():
reservation_instance_ids = [
instance.id for instance in reservation.instances]
matching_reservation = any(
instance_id in reservation_instance_ids for instance_id in instance_ids)
if matching_reservation:
# We need to make a copy of the reservation because we have to modify the
# instances to limit to those requested
reservation.instances = [
instance for instance in reservation.instances if instance.id in instance_ids]
reservations.append(reservation)
@ -786,15 +820,8 @@ class InstanceBackend(object):
reservations = filter_reservations(reservations, filters)
return reservations
def all_reservations(self, make_copy=False, filters=None):
if make_copy:
# Return copies so that other functions can modify them with changing
# the originals
reservations = [copy.deepcopy(reservation)
for reservation in self.reservations.values()]
else:
reservations = [
reservation for reservation in self.reservations.values()]
def all_reservations(self, filters=None):
reservations = [copy.copy(reservation) for reservation in self.reservations.values()]
if filters is not None:
reservations = filter_reservations(reservations, filters)
return reservations
@ -984,17 +1011,31 @@ class TagBackend(object):
class Ami(TaggedEC2Resource):
def __init__(self, ec2_backend, ami_id, instance=None, source_ami=None,
name=None, description=None):
name=None, description=None, owner_id=None,
public=False, virtualization_type=None, architecture=None,
state='available', creation_date=None, platform=None,
image_type='machine', image_location=None, hypervisor=None,
root_device_type=None, root_device_name=None, sriov='simple',
region_name='us-east-1a'
):
self.ec2_backend = ec2_backend
self.id = ami_id
self.state = "available"
self.state = state
self.name = name
self.image_type = image_type
self.image_location = image_location
self.owner_id = owner_id
self.description = description
self.virtualization_type = None
self.architecture = None
self.virtualization_type = virtualization_type
self.architecture = architecture
self.kernel_id = None
self.platform = None
self.creation_date = utc_date_and_time()
self.platform = platform
self.hypervisor = hypervisor
self.root_device_name = root_device_name
self.root_device_type = root_device_type
self.sriov = sriov
self.creation_date = utc_date_and_time() if creation_date is None else creation_date
if instance:
self.instance = instance
@ -1022,8 +1063,11 @@ class Ami(TaggedEC2Resource):
self.launch_permission_groups = set()
self.launch_permission_users = set()
if public:
self.launch_permission_groups.add('all')
# AWS auto-creates these, we should reflect the same.
volume = self.ec2_backend.create_volume(15, "us-east-1a")
volume = self.ec2_backend.create_volume(15, region_name)
self.ebs_snapshot = self.ec2_backend.create_snapshot(
volume.id, "Auto-created snapshot for AMI %s" % self.id)
@ -1050,6 +1094,8 @@ class Ami(TaggedEC2Resource):
return self.state
elif filter_name == 'name':
return self.name
elif filter_name == 'owner-id':
return self.owner_id
else:
return super(Ami, self).get_filter_value(
filter_name, 'DescribeImages')
@ -1058,14 +1104,22 @@ class Ami(TaggedEC2Resource):
class AmiBackend(object):
def __init__(self):
self.amis = {}
self._load_amis()
super(AmiBackend, self).__init__()
def create_image(self, instance_id, name=None, description=None):
def _load_amis(self):
for ami in AMIS:
ami_id = ami['ami_id']
self.amis[ami_id] = Ami(self, **ami)
def create_image(self, instance_id, name=None, description=None, owner_id=None):
# TODO: check that instance exists and pull info from it.
ami_id = random_ami_id()
instance = self.get_instance(instance_id)
ami = Ami(self, ami_id, instance=instance, source_ami=None,
name=name, description=description)
name=name, description=description, owner_id=owner_id)
self.amis[ami_id] = ami
return ami
@ -1078,30 +1132,29 @@ class AmiBackend(object):
self.amis[ami_id] = ami
return ami
def describe_images(self, ami_ids=(), filters=None, exec_users=None):
images = []
def describe_images(self, ami_ids=(), filters=None, exec_users=None, owners=None):
images = self.amis.values()
# Limit images by launch permissions
if exec_users:
for ami_id in self.amis:
found = False
tmp_images = []
for ami in images:
for user_id in exec_users:
if user_id in self.amis[ami_id].launch_permission_users:
found = True
if found:
images.append(self.amis[ami_id])
if images == []:
return images
if user_id in ami.launch_permission_users:
tmp_images.append(ami)
images = tmp_images
# Limit by owner ids
if owners:
images = [ami for ami in images if ami.owner_id in owners]
if ami_ids:
images = [ami for ami in images if ami.id in ami_ids]
# Generic filters
if filters:
images = images or self.amis.values()
return generic_filter(filters, images)
else:
for ami_id in ami_ids:
if ami_id in self.amis:
images.append(self.amis[ami_id])
elif not ami_id.startswith("ami-"):
raise MalformedAMIIdError(ami_id)
else:
raise InvalidAMIIdError(ami_id)
return images or self.amis.values()
return images
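The reworked `describe_images` applies each restriction as a successive narrowing of the candidate list. A sketch of that flow with hypothetical Ami records reduced to plain dicts:

```python
def describe_images_sketch(amis, owners=None, ami_ids=()):
    images = list(amis)
    if owners:
        # New owner filter: keep only AMIs whose owner_id is in the requested set.
        images = [ami for ami in images if ami['owner_id'] in owners]
    if ami_ids:
        images = [ami for ami in images if ami['id'] in ami_ids]
    return images

amis = [{'id': 'ami-1', 'owner_id': '123456789012'},
        {'id': 'ami-2', 'owner_id': '099720109477'}]
```

Each filter is independent, so combinations (owners plus explicit AMI IDs) compose naturally.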
def deregister_image(self, ami_id):
if ami_id in self.amis:
@ -2123,10 +2176,17 @@ class Subnet(TaggedEC2Resource):
self.id = subnet_id
self.vpc_id = vpc_id
self.cidr_block = cidr_block
self.cidr = ipaddress.ip_network(six.text_type(self.cidr_block))
self._availability_zone = availability_zone
self.default_for_az = default_for_az
self.map_public_ip_on_launch = map_public_ip_on_launch
# Theory is we assign IPs as we go (there are 16,777,214 usable IPs in a /8)
self._subnet_ip_generator = self.cidr.hosts()
self.reserved_ips = [six.next(self._subnet_ip_generator) for _ in range(0, 3)] # Reserved by AWS
self._unused_ips = set() # if instance is destroyed hold IP here for reuse
self._subnet_ips = {} # has IP: instance
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
properties = cloudformation_json['Properties']
@ -2193,6 +2253,46 @@ class Subnet(TaggedEC2Resource):
'"Fn::GetAtt" : [ "{0}" , "AvailabilityZone" ]"')
raise UnformattedGetAttTemplateException()
def get_available_subnet_ip(self, instance):
try:
new_ip = self._unused_ips.pop()
except KeyError:
new_ip = six.next(self._subnet_ip_generator)
# Skip any IPs that have been manually specified
while str(new_ip) in self._subnet_ips:
new_ip = six.next(self._subnet_ip_generator)
if new_ip == self.cidr.broadcast_address:
raise StopIteration()  # Broadcast address can't be used, obviously
# TODO: StopIteration will also be raised if no IPs are available; not sure how AWS handles this.
new_ip = str(new_ip)
self._subnet_ips[new_ip] = instance
return new_ip
def request_ip(self, ip, instance):
if ipaddress.ip_address(ip) not in self.cidr:
raise Exception('IP does not fall in the subnet CIDR of {0}'.format(self.cidr))
if ip in self._subnet_ips:
raise Exception('IP already in use')
try:
self._unused_ips.remove(ip)
except KeyError:
pass
self._subnet_ips[ip] = instance
return ip
def del_subnet_ip(self, ip):
try:
del self._subnet_ips[ip]
self._unused_ips.add(ip)
except KeyError:
pass # Unknown IP
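The subnet's IP bookkeeping above is built on the stdlib `ipaddress` host generator. A standalone sketch (a /28 here for brevity) showing how the first three host addresses are set aside, matching the `reserved_ips` logic:

```python
import ipaddress

cidr = ipaddress.ip_network(u'10.0.0.0/28')
hosts = cidr.hosts()  # yields usable host addresses, skipping the network address
# The first three host addresses are reserved, as in reserved_ips above.
reserved = [next(hosts) for _ in range(3)]
first_usable = str(next(hosts))
# first_usable == '10.0.0.4'
```

The broadcast address never appears from `hosts()`, but the allocation code above still guards against it explicitly once manually-requested IPs enter the picture.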
class SubnetBackend(object):
def __init__(self):
@ -3615,8 +3715,8 @@ class NatGatewayBackend(object):
return self.nat_gateways.pop(nat_gateway_id)
class EC2Backend(BaseBackend, InstanceBackend, TagBackend, AmiBackend,
RegionsAndZonesBackend, SecurityGroupBackend, EBSBackend,
class EC2Backend(BaseBackend, InstanceBackend, TagBackend, EBSBackend,
RegionsAndZonesBackend, SecurityGroupBackend, AmiBackend,
VPCBackend, SubnetBackend, SubnetRouteTableAssociationBackend,
NetworkInterfaceBackend, VPNConnectionBackend,
VPCPeeringConnectionBackend,


@ -0,0 +1,546 @@
[
{
"ami_id": "ami-03cf127a",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Nano Locale English AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Nano-Base-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-12c6146b",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2008 R2 SP1 Datacenter 64-bit Locale English Base AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2008-R2_SP1-English-64Bit-Base-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-1812c061",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Locale English with SQL Standard 2016 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2016_SP1_Standard-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-1e749f67",
"state": "available",
"public": true,
"owner_id": "099720109477",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Canonical, Ubuntu, 14.04 LTS, amd64 trusty image build on 2017-07-27",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-20170727",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-1ecc1e67",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-Base-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-1f12c066",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Locale English with SQL Express 2016 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2016_SP1_Express-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-24f3215d",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English with SQL Web 2014 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-SQL_2014_SP2_Web-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-35e92e4c",
"state": "available",
"public": true,
"owner_id": "013907871322",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "SUSE Linux Enterprise Server 12 SP3 (HVM, 64-bit, SSD-Backed)",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "suse-sles-12-sp3-v20170907-hvm-ssd-x86_64",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-3bf32142",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English with SQL Express 2016 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-SQL_2016_SP1_Express-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-3df32144",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English with SQL Enterprise 2016 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-SQL_2016_SP1_Enterprise-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-56ec3e2f",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Locale English with SQL Express 2017 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2017_Express-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-61db0918",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2003 R2 SP2 Datacenter 64-bit Locale English Base AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2003-R2_SP2-English-64Bit-Base-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-6ef02217",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English with SQL Web 2016 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-SQL_2016_SP1_Web-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-760aaa0f",
"state": "available",
"public": true,
"owner_id": "137112412989",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/xvda",
"description": "Amazon Linux AMI 2017.09.1.20171103 x86_64 HVM GP2",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "amzn-ami-hvm-2017.09.1.20171103-x86_64-gp2",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-77ed3f0e",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Full Locale English with SQL Enterprise 2016 SP1 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2016_SP1_Enterprise-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-785db401",
"state": "available",
"public": true,
"owner_id": "099720109477",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Canonical, Ubuntu, 16.04 LTS, amd64 xenial image build on 2017-07-21",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170721",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-8104a4f8",
"state": "available",
"public": true,
"owner_id": "137112412989",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Amazon Linux AMI 2017.09.1.20171103 x86_64 PV EBS",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "amzn-ami-pv-2017.09.1.20171103-x86_64-ebs",
"virtualization_type": "paravirtual",
"hypervisor": "xen"
},
{
"ami_id": "ami-84ee3cfd",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Locale English with SQL Web 2017 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2017_Web-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-86ee3cff",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Locale English with SQL Standard 2017 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2017_Standard-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-999844e0",
"state": "available",
"public": true,
"owner_id": "898082745236",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/xvda",
"description": "Deep Learning on Amazon Linux with MXNet, Tensorflow, Caffe, Theano, Torch, CNTK and Keras",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "Deep Learning AMI Amazon Linux - 3.3_Oct2017",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-9b32e8e2",
"state": "available",
"public": true,
"owner_id": "898082745236",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "CUDA9 Classic Ubuntu DLAMI 1508914531",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "Ubuntu CUDA9 DLAMI with MXNet/TF/Caffe2",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-a9cc1ed0",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English with SQL Standard 2014 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-SQL_2014_SP2_Standard-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-afee3cd6",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Locale English with SQL Web 2016 SP1 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2016_SP1_Web-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-b7e93bce",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 with Desktop Experience Locale English AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-Base-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-bb9a6bc2",
"state": "available",
"public": true,
"owner_id": "309956199498",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Provided by Red Hat, Inc.",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "RHEL-7.4_HVM_GA-20170808-x86_64-2-Hourly2-GP2",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-bceb39c5",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 with Containers Locale English AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-Containers-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-c2ff2dbb",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 RTM 64-bit Locale English Base AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-RTM-English-64Bit-Base-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-c6f321bf",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English with SQL Express 2014 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-SQL_2014_SP2_Express-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-d1cb19a8",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2008 SP2 Datacenter 64-bit Locale English Base AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2008-SP2-English-64Bit-Base-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-dca37ea5",
"state": "available",
"public": true,
"owner_id": "898082745236",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Deep Learning on Ubuntu Linux with MXNet, Tensorflow, Caffe, Theano, Torch, CNTK and Keras",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "Deep Learning AMI Ubuntu Linux - 2.4_Oct2017",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-f0e83a89",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2016 Locale English with SQL Enterprise 2017 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2016-English-Full-SQL_2017_Enterprise-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-f4cf1d8d",
"state": "available",
"public": true,
"owner_id": "801119661308",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda1",
"description": "Microsoft Windows Server 2012 R2 RTM 64-bit Locale English with SQL Standard 2016 AMI provided by Amazon",
"image_type": "machine",
"platform": "windows",
"architecture": "x86_64",
"name": "Windows_Server-2012-R2_RTM-English-64Bit-SQL_2016_SP1_Standard-2017.10.13",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-f8e54081",
"state": "available",
"public": true,
"owner_id": "898082745236",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/xvda",
"description": "CUDA9 Classic Amazon Linux DLAMI 1508914924",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "CUDA9ClassicAmazonLinuxDLAMIwithMXNetTensorflowandCaffe2 ",
"virtualization_type": "hvm",
"hypervisor": "xen"
},
{
"ami_id": "ami-fa7cdd89",
"state": "available",
"public": true,
"owner_id": "013907871322",
"sriov": "simple",
"root_device_type": "ebs",
"root_device_name": "/dev/sda",
"description": "SUSE Linux Enterprise Server 11 Service Pack 4 ((PV, 64-bit, SSD-Backed)",
"image_type": "machine",
"platform": null,
"architecture": "x86_64",
"name": "suse-sles-11-sp4-v20151207-pv-ssd-x86_64",
"virtualization_type": "paravirtual",
"hypervisor": "xen"
}
]

View File

@ -36,9 +36,10 @@ class AmisResponse(BaseResponse):
def describe_images(self):
ami_ids = self._get_multi_param('ImageId')
filters = filters_from_querystring(self.querystring)
owners = self._get_multi_param('Owner')
exec_users = self._get_multi_param('ExecutableBy')
images = self.ec2_backend.describe_images(
ami_ids=ami_ids, filters=filters, exec_users=exec_users)
ami_ids=ami_ids, filters=filters, exec_users=exec_users, owners=owners)
template = self.response_template(DESCRIBE_IMAGES_RESPONSE)
return template.render(images=images)
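The hunk above threads a new `Owner` multi-param through `describe_images`. A minimal sketch of the owner filtering this enables (illustrative helper name and data, not moto's actual internals):

```python
def filter_images_by_owner(images, owners):
    # No Owner filter supplied: return everything, matching EC2's behaviour.
    if not owners:
        return list(images)
    return [img for img in images if img["owner_id"] in owners]

# Sample records shaped like the amis.json entries above.
images = [
    {"ami_id": "ami-760aaa0f", "owner_id": "137112412989"},  # Amazon Linux
    {"ami_id": "ami-77ed3f0e", "owner_id": "801119661308"},  # Windows/SQL
]
amazon_images = filter_images_by_owner(images, ["137112412989"])
```

With the filter applied, only the Amazon-owned image survives; with no owners given, the full list comes back.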
@ -92,12 +93,12 @@ DESCRIBE_IMAGES_RESPONSE = """<DescribeImagesResponse xmlns="http://ec2.amazonaw
{% for image in images %}
<item>
<imageId>{{ image.id }}</imageId>
<imageLocation>amazon/getting-started</imageLocation>
<imageLocation>{{ image.image_location }}</imageLocation>
<imageState>{{ image.state }}</imageState>
<imageOwnerId>123456789012</imageOwnerId>
<imageOwnerId>{{ image.owner_id }}</imageOwnerId>
<isPublic>{{ image.is_public_string }}</isPublic>
<architecture>{{ image.architecture }}</architecture>
<imageType>machine</imageType>
<imageType>{{ image.image_type }}</imageType>
<kernelId>{{ image.kernel_id }}</kernelId>
<ramdiskId>ari-1a2b3c4d</ramdiskId>
<imageOwnerAlias>amazon</imageOwnerAlias>
@ -107,8 +108,8 @@ DESCRIBE_IMAGES_RESPONSE = """<DescribeImagesResponse xmlns="http://ec2.amazonaw
<platform>{{ image.platform }}</platform>
{% endif %}
<description>{{ image.description }}</description>
<rootDeviceType>ebs</rootDeviceType>
<rootDeviceName>/dev/sda1</rootDeviceName>
<rootDeviceType>{{ image.root_device_type }}</rootDeviceType>
<rootDeviceName>{{ image.root_device_name }}</rootDeviceName>
<blockDeviceMapping>
<item>
<deviceName>/dev/sda1</deviceName>

View File

@ -16,8 +16,7 @@ class InstanceResponse(BaseResponse):
reservations = self.ec2_backend.get_reservations_by_instance_ids(
instance_ids, filters=filter_dict)
else:
reservations = self.ec2_backend.all_reservations(
make_copy=True, filters=filter_dict)
reservations = self.ec2_backend.all_reservations(filters=filter_dict)
reservation_ids = [reservation.id for reservation in reservations]
if token:
@ -30,11 +29,12 @@ class InstanceResponse(BaseResponse):
if max_results and len(reservations) > (start + max_results):
next_token = reservations_resp[-1].id
template = self.response_template(EC2_DESCRIBE_INSTANCES)
return template.render(reservations=reservations_resp, next_token=next_token)
return template.render(reservations=reservations_resp, next_token=next_token).replace('True', 'true').replace('False', 'false')
def run_instances(self):
min_count = int(self._get_param('MinCount', if_none='1'))
image_id = self._get_param('ImageId')
owner_id = self._get_param('OwnerId')
user_data = self._get_param('UserData')
security_group_names = self._get_multi_param('SecurityGroup')
security_group_ids = self._get_multi_param('SecurityGroupId')
@ -45,6 +45,7 @@ class InstanceResponse(BaseResponse):
private_ip = self._get_param('PrivateIpAddress')
associate_public_ip = self._get_param('AssociatePublicIpAddress')
key_name = self._get_param('KeyName')
ebs_optimized = self._get_param('EbsOptimized')
tags = self._parse_tag_specification("TagSpecification")
region_name = self.region
@ -52,9 +53,9 @@ class InstanceResponse(BaseResponse):
new_reservation = self.ec2_backend.add_instances(
image_id, min_count, user_data, security_group_names,
instance_type=instance_type, placement=placement, region_name=region_name, subnet_id=subnet_id,
key_name=key_name, security_group_ids=security_group_ids,
owner_id=owner_id, key_name=key_name, security_group_ids=security_group_ids,
nics=nics, private_ip=private_ip, associate_public_ip=associate_public_ip,
tags=tags)
tags=tags, ebs_optimized=ebs_optimized)
template = self.response_template(EC2_RUN_INSTANCES)
return template.render(reservation=new_reservation)
@ -144,7 +145,12 @@ class InstanceResponse(BaseResponse):
"""
Handles requests which are generated by code similar to:
instance.modify_attribute('blockDeviceMapping', {'/dev/sda1': True})
instance.modify_attribute(
BlockDeviceMappings=[{
'DeviceName': '/dev/sda1',
'Ebs': {'DeleteOnTermination': True}
}]
)
The querystring contains information similar to:
@ -237,6 +243,7 @@ EC2_RUN_INSTANCES = """<RunInstancesResponse xmlns="http://ec2.amazonaws.com/doc
<dnsName>{{ instance.public_dns }}</dnsName>
<reason/>
<keyName>{{ instance.key_name }}</keyName>
<ebsOptimized>{{ instance.ebs_optimized }}</ebsOptimized>
<amiLaunchIndex>0</amiLaunchIndex>
<instanceType>{{ instance.instance_type }}</instanceType>
<launchTime>{{ instance.launch_time }}</launchTime>
@ -376,6 +383,7 @@ EC2_DESCRIBE_INSTANCES = """<DescribeInstancesResponse xmlns="http://ec2.amazona
<dnsName>{{ instance.public_dns }}</dnsName>
<reason>{{ instance._reason }}</reason>
<keyName>{{ instance.key_name }}</keyName>
<ebsOptimized>{{ instance.ebs_optimized }}</ebsOptimized>
<amiLaunchIndex>0</amiLaunchIndex>
<productCodes/>
<instanceType>{{ instance.instance_type }}</instanceType>

View File

@ -2,8 +2,10 @@ from __future__ import unicode_literals
import uuid
from datetime import datetime
from random import random, randint
import boto3
import pytz
from moto.core.exceptions import JsonRESTError
from moto.core import BaseBackend, BaseModel
from moto.ec2 import ec2_backends
from copy import copy
@ -148,7 +150,7 @@ class Task(BaseObject):
resource_requirements, overrides={}, started_by=''):
self.cluster_arn = cluster.arn
self.task_arn = 'arn:aws:ecs:us-east-1:012345678910:task/{0}'.format(
str(uuid.uuid1()))
str(uuid.uuid4()))
self.container_instance_arn = container_instance_arn
self.last_status = 'RUNNING'
self.desired_status = 'RUNNING'
@ -260,7 +262,7 @@ class Service(BaseObject):
class ContainerInstance(BaseObject):
def __init__(self, ec2_instance_id):
def __init__(self, ec2_instance_id, region_name):
self.ec2_instance_id = ec2_instance_id
self.agent_connected = True
self.status = 'ACTIVE'
@ -288,8 +290,8 @@ class ContainerInstance(BaseObject):
'stringSetValue': [],
'type': 'STRINGSET'}]
self.container_instance_arn = "arn:aws:ecs:us-east-1:012345678910:container-instance/{0}".format(
str(uuid.uuid1()))
self.pending_task_count = 0
str(uuid.uuid4()))
self.pending_tasks_count = 0
self.remaining_resources = [
{'doubleValue': 0.0,
'integerValue': 4096,
@ -314,18 +316,35 @@ class ContainerInstance(BaseObject):
'stringSetValue': [],
'type': 'STRINGSET'}
]
self.running_task_count = 0
self.running_tasks_count = 0
self.version_info = {
'agentVersion': "1.0.0",
'agentHash': '4023248',
'dockerVersion': 'DockerVersion: 1.5.0'
}
ec2_backend = ec2_backends[region_name]
ec2_instance = ec2_backend.get_instance(ec2_instance_id)
self.attributes = {
'ecs.ami-id': ec2_instance.image_id,
'ecs.availability-zone': ec2_instance.placement,
'ecs.instance-type': ec2_instance.instance_type,
'ecs.os-type': ec2_instance.platform if ec2_instance.platform == 'windows' else 'linux' # options are windows and linux, linux is default
}
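The `ecs.os-type` attribute above defaults to `linux` for anything that is not explicitly a Windows instance. A one-line sketch of that derivation (hypothetical helper name):

```python
def ecs_os_type(platform):
    # ECS only distinguishes 'windows' and 'linux'; 'linux' is the default.
    return platform if platform == "windows" else "linux"
```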
@property
def response_object(self):
response_object = self.gen_response_object()
response_object['attributes'] = [self._format_attribute(name, value) for name, value in response_object['attributes'].items()]
return response_object
def _format_attribute(self, name, value):
formatted_attr = {
'name': name,
}
if value is not None:
formatted_attr['value'] = value
return formatted_attr
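The `response_object` / `_format_attribute` pair above converts the internal attribute dict into the list-of-objects shape ECS reports, dropping `value` when it is `None`. A standalone sketch of the same transformation (sample data is illustrative):

```python
def format_attribute(name, value):
    # ECS attributes are {"name": ...} with an optional "value" key.
    formatted = {"name": name}
    if value is not None:
        formatted["value"] = value
    return formatted

attrs = {"ecs.os-type": "linux", "ecs.capability.privileged-container": None}
formatted = [format_attribute(n, v) for n, v in attrs.items()]
```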
class ContainerInstanceFailure(BaseObject):
@ -344,12 +363,19 @@ class ContainerInstanceFailure(BaseObject):
class EC2ContainerServiceBackend(BaseBackend):
def __init__(self):
def __init__(self, region_name):
super(EC2ContainerServiceBackend, self).__init__()
self.clusters = {}
self.task_definitions = {}
self.tasks = {}
self.services = {}
self.container_instances = {}
self.region_name = region_name
def reset(self):
region_name = self.region_name
self.__dict__ = {}
self.__init__(region_name)
def describe_task_definition(self, task_definition_str):
task_definition_name = task_definition_str.split('/')[-1]
@ -666,7 +692,7 @@ class EC2ContainerServiceBackend(BaseBackend):
cluster_name = cluster_str.split('/')[-1]
if cluster_name not in self.clusters:
raise Exception("{0} is not a cluster".format(cluster_name))
container_instance = ContainerInstance(ec2_instance_id)
container_instance = ContainerInstance(ec2_instance_id, self.region_name)
if not self.container_instances.get(cluster_name):
self.container_instances[cluster_name] = {}
container_instance_id = container_instance.container_instance_arn.split(
@ -737,7 +763,7 @@ class EC2ContainerServiceBackend(BaseBackend):
resource["stringSetValue"].remove(str(port))
else:
resource["stringSetValue"].append(str(port))
container_instance.running_task_count += resource_multiplier * 1
container_instance.running_tasks_count += resource_multiplier * 1
def deregister_container_instance(self, cluster_str, container_instance_str, force):
failures = []
@ -748,11 +774,11 @@ class EC2ContainerServiceBackend(BaseBackend):
container_instance = self.container_instances[cluster_name].get(container_instance_id)
if container_instance is None:
raise Exception("{0} is not a container id in the cluster")
if not force and container_instance.running_task_count > 0:
if not force and container_instance.running_tasks_count > 0:
raise Exception("Found running tasks on the instance.")
# Keep force-deregistered instances that still had tasks running (under an
# 'orphaned' key) in case callers want to inspect them; nothing extra is kept
# when no tasks were running.
elif force and container_instance.running_task_count > 0:
elif force and container_instance.running_tasks_count > 0:
if not self.container_instances.get('orphaned'):
self.container_instances['orphaned'] = {}
self.container_instances['orphaned'][container_instance_id] = container_instance
@ -766,7 +792,102 @@ class EC2ContainerServiceBackend(BaseBackend):
raise Exception("{0} is not a cluster".format(cluster_name))
pass
def put_attributes(self, cluster_name, attributes=None):
if cluster_name is None or cluster_name not in self.clusters:
raise JsonRESTError('ClusterNotFoundException', 'Cluster not found', status=400)
ecs_backends = {}
for region, ec2_backend in ec2_backends.items():
ecs_backends[region] = EC2ContainerServiceBackend()
if attributes is None:
raise JsonRESTError('InvalidParameterException', 'attributes value is required')
for attr in attributes:
self._put_attribute(cluster_name, attr['name'], attr.get('value'), attr.get('targetId'), attr.get('targetType'))
def _put_attribute(self, cluster_name, name, value=None, target_id=None, target_type=None):
if target_id is None and target_type is None:
for instance in self.container_instances[cluster_name].values():
instance.attributes[name] = value
elif target_type is None:
# targetId is full container instance arn
try:
arn = target_id.rsplit('/', 1)[-1]
self.container_instances[cluster_name][arn].attributes[name] = value
except KeyError:
raise JsonRESTError('TargetNotFoundException', 'Could not find {0}'.format(target_id))
else:
# targetId is container uuid, targetType must be container-instance
try:
if target_type != 'container-instance':
raise JsonRESTError('TargetNotFoundException', 'Could not find {0}'.format(target_id))
self.container_instances[cluster_name][target_id].attributes[name] = value
except KeyError:
raise JsonRESTError('TargetNotFoundException', 'Could not find {0}'.format(target_id))
def list_attributes(self, target_type, cluster_name=None, attr_name=None, attr_value=None, max_results=None, next_token=None):
if target_type != 'container-instance':
raise JsonRESTError('InvalidParameterException', 'targetType must be container-instance')
filters = [lambda x: True]
# each item is a tuple: (0 cluster_name, 1 arn, 2 attr_name, 3 attr_value)
if cluster_name is not None:
filters.append(lambda item: item[0] == cluster_name)
if attr_name:
filters.append(lambda item: item[2] == attr_name)
if attr_value:
filters.append(lambda item: item[3] == attr_value)
all_attrs = []
for cluster_name, cobj in self.container_instances.items():
for container_instance in cobj.values():
for key, value in container_instance.attributes.items():
all_attrs.append((cluster_name, container_instance.container_instance_arn, key, value))
return filter(lambda x: all(f(x) for f in filters), all_attrs)
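`list_attributes` above composes one predicate per supplied query parameter and keeps only items that pass every predicate. A self-contained sketch of that pattern (hypothetical sample data; note the second guard should test `attr_value`, not `attr_name`):

```python
all_attrs = [
    ("default", "arn:aws:ecs:us-east-1:0123:container-instance/abc", "ecs.os-type", "linux"),
    ("default", "arn:aws:ecs:us-east-1:0123:container-instance/def", "ecs.os-type", "windows"),
]

filters = [lambda item: True]
attr_name = "ecs.os-type"
attr_value = "linux"
if attr_name:
    filters.append(lambda item: item[2] == attr_name)
if attr_value:
    filters.append(lambda item: item[3] == attr_value)

# An item survives only if every filter accepts it.
matches = [item for item in all_attrs if all(f(item) for f in filters)]
```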
def delete_attributes(self, cluster_name, attributes=None):
if cluster_name is None or cluster_name not in self.clusters:
raise JsonRESTError('ClusterNotFoundException', 'Cluster not found', status=400)
if attributes is None:
raise JsonRESTError('InvalidParameterException', 'attributes value is required')
for attr in attributes:
self._delete_attribute(cluster_name, attr['name'], attr.get('value'), attr.get('targetId'), attr.get('targetType'))
def _delete_attribute(self, cluster_name, name, value=None, target_id=None, target_type=None):
if target_id is None and target_type is None:
for instance in self.container_instances[cluster_name].values():
if name in instance.attributes and instance.attributes[name] == value:
del instance.attributes[name]
elif target_type is None:
# targetId is full container instance arn
try:
arn = target_id.rsplit('/', 1)[-1]
instance = self.container_instances[cluster_name][arn]
if name in instance.attributes and instance.attributes[name] == value:
del instance.attributes[name]
except KeyError:
raise JsonRESTError('TargetNotFoundException', 'Could not find {0}'.format(target_id))
else:
# targetId is container uuid, targetType must be container-instance
try:
if target_type != 'container-instance':
raise JsonRESTError('TargetNotFoundException', 'Could not find {0}'.format(target_id))
instance = self.container_instances[cluster_name][target_id]
if name in instance.attributes and instance.attributes[name] == value:
del instance.attributes[name]
except KeyError:
raise JsonRESTError('TargetNotFoundException', 'Could not find {0}'.format(target_id))
def list_task_definition_families(self, family_prefix=None, status=None, max_results=None, next_token=None):
for task_fam in self.task_definitions:
if family_prefix is not None and not task_fam.startswith(family_prefix):
continue
yield task_fam
available_regions = boto3.session.Session().get_available_regions("ecs")
ecs_backends = {region: EC2ContainerServiceBackend(region) for region in available_regions}

View File

@ -9,6 +9,12 @@ class EC2ContainerServiceResponse(BaseResponse):
@property
def ecs_backend(self):
"""
ECS Backend
:return: ECS Backend object
:rtype: moto.ecs.models.EC2ContainerServiceBackend
"""
return ecs_backends[self.region]
@property
@ -34,7 +40,7 @@ class EC2ContainerServiceResponse(BaseResponse):
cluster_arns = self.ecs_backend.list_clusters()
return json.dumps({
'clusterArns': cluster_arns
# 'nextToken': str(uuid.uuid1())
# 'nextToken': str(uuid.uuid4())
})
def describe_clusters(self):
@ -66,7 +72,7 @@ class EC2ContainerServiceResponse(BaseResponse):
task_definition_arns = self.ecs_backend.list_task_definitions()
return json.dumps({
'taskDefinitionArns': task_definition_arns
# 'nextToken': str(uuid.uuid1())
# 'nextToken': str(uuid.uuid4())
})
def describe_task_definition(self):
@ -159,7 +165,7 @@ class EC2ContainerServiceResponse(BaseResponse):
return json.dumps({
'serviceArns': service_arns
# ,
# 'nextToken': str(uuid.uuid1())
# 'nextToken': str(uuid.uuid4())
})
def describe_services(self):
@ -245,3 +251,62 @@ class EC2ContainerServiceResponse(BaseResponse):
'failures': [ci.response_object for ci in failures],
'containerInstances': [ci.response_object for ci in container_instances]
})
def put_attributes(self):
cluster_name = self._get_param('cluster')
attributes = self._get_param('attributes')
self.ecs_backend.put_attributes(cluster_name, attributes)
return json.dumps({'attributes': attributes})
def list_attributes(self):
cluster_name = self._get_param('cluster')
attr_name = self._get_param('attributeName')
attr_value = self._get_param('attributeValue')
target_type = self._get_param('targetType')
max_results = self._get_param('maxResults')
next_token = self._get_param('nextToken')
results = self.ecs_backend.list_attributes(target_type, cluster_name, attr_name, attr_value, max_results, next_token)
# Each result is a tuple of (cluster_name, arn, attr_name, attr_value)
formatted_results = []
for _, arn, name, value in results:
tmp_result = {
'name': name,
'targetId': arn
}
if value is not None:
tmp_result['value'] = value
formatted_results.append(tmp_result)
return json.dumps({'attributes': formatted_results})
def delete_attributes(self):
cluster_name = self._get_param('cluster')
attributes = self._get_param('attributes')
self.ecs_backend.delete_attributes(cluster_name, attributes)
return json.dumps({'attributes': attributes})
def discover_poll_endpoint(self):
# This API is used internally by the ECS agent and has no public
# documentation, so we respond with valid but placeholder data.
# cluster_name = self._get_param('cluster')
# instance = self._get_param('containerInstance')
return json.dumps({
'endpoint': 'http://localhost',
'telemetryEndpoint': 'http://localhost'
})
def list_task_definition_families(self):
family_prefix = self._get_param('familyPrefix')
status = self._get_param('status')
max_results = self._get_param('maxResults')
next_token = self._get_param('nextToken')
results = self.ecs_backend.list_task_definition_families(family_prefix, status, max_results, next_token)
return json.dumps({'families': list(results)})

View File

@ -3,8 +3,12 @@ from __future__ import unicode_literals
import datetime
import re
from moto.compat import OrderedDict
from moto.core.exceptions import RESTError
from moto.core import BaseBackend, BaseModel
from moto.ec2.models import ec2_backends
from moto.acm.models import acm_backends
from .utils import make_arn_for_target_group
from .utils import make_arn_for_load_balancer
from .exceptions import (
DuplicateLoadBalancerName,
DuplicateListenerError,
@ -40,33 +44,44 @@ class FakeHealthStatus(BaseModel):
class FakeTargetGroup(BaseModel):
HTTP_CODE_REGEX = re.compile(r'(?:(?:\d+-\d+|\d+),?)+')
def __init__(self,
name,
arn,
vpc_id,
protocol,
port,
healthcheck_protocol,
healthcheck_port,
healthcheck_path,
healthcheck_interval_seconds,
healthcheck_timeout_seconds,
healthy_threshold_count,
unhealthy_threshold_count):
healthcheck_protocol=None,
healthcheck_port=None,
healthcheck_path=None,
healthcheck_interval_seconds=None,
healthcheck_timeout_seconds=None,
healthy_threshold_count=None,
unhealthy_threshold_count=None,
matcher=None,
target_type=None):
# TODO: default values differ when you add a Network Load Balancer
self.name = name
self.arn = arn
self.vpc_id = vpc_id
self.protocol = protocol
self.port = port
self.healthcheck_protocol = healthcheck_protocol
self.healthcheck_port = healthcheck_port
self.healthcheck_path = healthcheck_path
self.healthcheck_interval_seconds = healthcheck_interval_seconds
self.healthcheck_timeout_seconds = healthcheck_timeout_seconds
self.healthy_threshold_count = healthy_threshold_count
self.unhealthy_threshold_count = unhealthy_threshold_count
self.healthcheck_protocol = healthcheck_protocol or 'HTTP'
self.healthcheck_port = healthcheck_port or 'traffic-port'
self.healthcheck_path = healthcheck_path or '/'
self.healthcheck_interval_seconds = healthcheck_interval_seconds or 30
self.healthcheck_timeout_seconds = healthcheck_timeout_seconds or 5
self.healthy_threshold_count = healthy_threshold_count or 5
self.unhealthy_threshold_count = unhealthy_threshold_count or 2
self.load_balancer_arns = []
self.tags = {}
if matcher is None:
self.matcher = {'HttpCode': '200'}
else:
self.matcher = matcher
self.target_type = target_type
self.attributes = {
'deregistration_delay.timeout_seconds': 300,
@ -75,6 +90,10 @@ class FakeTargetGroup(BaseModel):
self.targets = OrderedDict()
@property
def physical_resource_id(self):
return self.arn
def register(self, targets):
for target in targets:
self.targets[target['id']] = {
@ -99,6 +118,46 @@ class FakeTargetGroup(BaseModel):
raise InvalidTargetError()
return FakeHealthStatus(t['id'], t['port'], self.healthcheck_port, 'healthy')
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
properties = cloudformation_json['Properties']
elbv2_backend = elbv2_backends[region_name]
# per cloudformation docs:
# The target group name should be shorter than 22 characters because
# AWS CloudFormation uses the target group name to create the name of the load balancer.
name = properties.get('Name', resource_name[:22])
vpc_id = properties.get("VpcId")
protocol = properties.get('Protocol')
port = properties.get("Port")
healthcheck_protocol = properties.get("HealthCheckProtocol")
healthcheck_port = properties.get("HealthCheckPort")
healthcheck_path = properties.get("HealthCheckPath")
healthcheck_interval_seconds = properties.get("HealthCheckIntervalSeconds")
healthcheck_timeout_seconds = properties.get("HealthCheckTimeoutSeconds")
healthy_threshold_count = properties.get("HealthyThresholdCount")
unhealthy_threshold_count = properties.get("UnhealthyThresholdCount")
matcher = properties.get("Matcher")
target_type = properties.get("TargetType")
target_group = elbv2_backend.create_target_group(
name=name,
vpc_id=vpc_id,
protocol=protocol,
port=port,
healthcheck_protocol=healthcheck_protocol,
healthcheck_port=healthcheck_port,
healthcheck_path=healthcheck_path,
healthcheck_interval_seconds=healthcheck_interval_seconds,
healthcheck_timeout_seconds=healthcheck_timeout_seconds,
healthy_threshold_count=healthy_threshold_count,
unhealthy_threshold_count=unhealthy_threshold_count,
matcher=matcher,
target_type=target_type,
)
return target_group
class FakeListener(BaseModel):
@ -109,6 +168,7 @@ class FakeListener(BaseModel):
self.port = port
self.ssl_policy = ssl_policy
self.certificate = certificate
self.certificates = [certificate] if certificate is not None else []
self.default_actions = default_actions
self._non_default_rules = []
self._default_rule = FakeRule(
@ -119,6 +179,10 @@ class FakeListener(BaseModel):
is_default=True
)
@property
def physical_resource_id(self):
return self.arn
@property
def rules(self):
return self._non_default_rules + [self._default_rule]
@ -130,6 +194,28 @@ class FakeListener(BaseModel):
self._non_default_rules.append(rule)
self._non_default_rules = sorted(self._non_default_rules, key=lambda x: x.priority)
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
properties = cloudformation_json['Properties']
elbv2_backend = elbv2_backends[region_name]
load_balancer_arn = properties.get("LoadBalancerArn")
protocol = properties.get("Protocol")
port = properties.get("Port")
ssl_policy = properties.get("SslPolicy")
certificates = properties.get("Certificates")
# transform default actions to conform with the rest of the code and XML templates
if "DefaultActions" in properties:
default_actions = []
for action in properties['DefaultActions']:
default_actions.append({'type': action['Type'], 'target_group_arn': action['TargetGroupArn']})
else:
default_actions = None
listener = elbv2_backend.create_listener(
load_balancer_arn, protocol, port, ssl_policy, certificates, default_actions)
return listener
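The `create_from_cloudformation_json` method above maps the CamelCase `DefaultActions` CloudFormation property into the snake_case dicts the rest of the code and the XML templates expect. A sketch of just that transform (the target group ARN is a placeholder):

```python
properties = {
    "DefaultActions": [
        {"Type": "forward",
         "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/example/abc123"},
    ]
}

# CamelCase CFN keys -> snake_case internal keys.
default_actions = [
    {"type": action["Type"], "target_group_arn": action["TargetGroupArn"]}
    for action in properties.get("DefaultActions", [])
]
```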
class FakeRule(BaseModel):
@ -153,6 +239,8 @@ class FakeBackend(BaseModel):
class FakeLoadBalancer(BaseModel):
VALID_ATTRS = {'access_logs.s3.enabled', 'access_logs.s3.bucket', 'access_logs.s3.prefix',
'deletion_protection.enabled', 'idle_timeout.timeout_seconds'}
def __init__(self, name, security_groups, subnets, vpc_id, arn, dns_name, scheme='internet-facing'):
self.name = name
@ -166,9 +254,18 @@ class FakeLoadBalancer(BaseModel):
self.arn = arn
self.dns_name = dns_name
self.stack = 'ipv4'
self.attrs = {
'access_logs.s3.enabled': 'false',
'access_logs.s3.bucket': None,
'access_logs.s3.prefix': None,
'deletion_protection.enabled': 'false',
'idle_timeout.timeout_seconds': '60'
}
@property
def physical_resource_id(self):
return self.name
return self.arn
def add_tag(self, key, value):
if len(self.tags) >= 10 and key not in self.tags:
@ -186,6 +283,48 @@ class FakeLoadBalancer(BaseModel):
''' Not exposed as part of the ELB API - used for CloudFormation. '''
elbv2_backends[region].delete_load_balancer(self.arn)
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
properties = cloudformation_json['Properties']
elbv2_backend = elbv2_backends[region_name]
name = properties.get('Name', resource_name)
security_groups = properties.get("SecurityGroups")
subnet_ids = properties.get('Subnets')
scheme = properties.get('Scheme', 'internet-facing')
load_balancer = elbv2_backend.create_load_balancer(name, security_groups, subnet_ids, scheme=scheme)
return load_balancer
def get_cfn_attribute(self, attribute_name):
'''
Implemented attributes:
* DNSName
* LoadBalancerName
Not implemented:
* CanonicalHostedZoneID
* LoadBalancerFullName
* SecurityGroups
This method is similar to models.py:FakeLoadBalancer.get_cfn_attribute()
'''
from moto.cloudformation.exceptions import UnformattedGetAttTemplateException
not_implemented_yet = [
'CanonicalHostedZoneID',
'LoadBalancerFullName',
'SecurityGroups',
]
if attribute_name == 'DNSName':
return self.dns_name
elif attribute_name == 'LoadBalancerName':
return self.name
elif attribute_name in not_implemented_yet:
raise NotImplementedError('"Fn::GetAtt" : [ "{0}" , "%s" ]' % attribute_name)
else:
raise UnformattedGetAttTemplateException()
class ELBv2Backend(BaseBackend):
@ -194,6 +333,26 @@ class ELBv2Backend(BaseBackend):
self.target_groups = OrderedDict()
self.load_balancers = OrderedDict()
@property
def ec2_backend(self):
"""
EC2 backend
:return: EC2 Backend
:rtype: moto.ec2.models.EC2Backend
"""
return ec2_backends[self.region_name]
@property
def acm_backend(self):
"""
ACM backend
:return: ACM Backend
:rtype: moto.acm.models.AWSCertificateManagerBackend
"""
return acm_backends[self.region_name]
def reset(self):
region_name = self.region_name
self.__dict__ = {}
@ -201,18 +360,17 @@ class ELBv2Backend(BaseBackend):
def create_load_balancer(self, name, security_groups, subnet_ids, scheme='internet-facing'):
vpc_id = None
ec2_backend = ec2_backends[self.region_name]
subnets = []
if not subnet_ids:
raise SubnetNotFoundError()
for subnet_id in subnet_ids:
subnet = ec2_backend.get_subnet(subnet_id)
subnet = self.ec2_backend.get_subnet(subnet_id)
if subnet is None:
raise SubnetNotFoundError()
subnets.append(subnet)
vpc_id = subnets[0].vpc_id
arn = "arn:aws:elasticloadbalancing:%s:1:loadbalancer/%s/50dc6c495c0c9188" % (self.region_name, name)
arn = make_arn_for_load_balancer(account_id=1, name=name, region_name=self.region_name)
dns_name = "%s-1.%s.elb.amazonaws.com" % (name, self.region_name)
if arn in self.load_balancers:
@ -279,7 +437,7 @@ class ELBv2Backend(BaseBackend):
def create_target_group(self, name, **kwargs):
if len(name) > 32:
raise InvalidTargetGroupNameError(
"Target group name '%s' cannot be longer than '32' characters" % name
)
if not re.match('^[a-zA-Z0-9\-]+$', name):
raise InvalidTargetGroupNameError(
@ -300,7 +458,20 @@ class ELBv2Backend(BaseBackend):
if target_group.name == name:
raise DuplicateTargetGroupName()
arn = "arn:aws:elasticloadbalancing:%s:1:targetgroup/%s/50dc6c495c0c9188" % (self.region_name, name)
valid_protocols = ['HTTPS', 'HTTP', 'TCP']
if kwargs.get('healthcheck_protocol') and kwargs['healthcheck_protocol'] not in valid_protocols:
raise InvalidConditionValueError(
"Value {} at 'healthCheckProtocol' failed to satisfy constraint: "
"Member must satisfy enum value set: {}".format(kwargs['healthcheck_protocol'], valid_protocols))
if kwargs.get('protocol') and kwargs['protocol'] not in valid_protocols:
raise InvalidConditionValueError(
"Value {} at 'protocol' failed to satisfy constraint: "
"Member must satisfy enum value set: {}".format(kwargs['protocol'], valid_protocols))
if kwargs.get('matcher') and FakeTargetGroup.HTTP_CODE_REGEX.match(kwargs['matcher']['HttpCode']) is None:
raise RESTError('InvalidParameterValue', 'HttpCode must be like 200 | 200-399 | 200,201 ...')
arn = make_arn_for_target_group(account_id=1, name=name, region_name=self.region_name)
target_group = FakeTargetGroup(name, arn, **kwargs)
self.target_groups[target_group.arn] = target_group
return target_group
@ -547,6 +718,166 @@ class ELBv2Backend(BaseBackend):
modified_rules.append(given_rule)
return modified_rules
def set_ip_address_type(self, arn, ip_type):
if ip_type not in ('ipv4', 'dualstack'):
raise RESTError('InvalidParameterValue', 'IpAddressType must be either ipv4 | dualstack')
balancer = self.load_balancers.get(arn)
if balancer is None:
raise LoadBalancerNotFoundError()
if ip_type == 'dualstack' and balancer.scheme == 'internal':
raise RESTError('InvalidConfigurationRequest', 'Internal load balancers cannot be dualstack')
balancer.stack = ip_type
def set_security_groups(self, arn, sec_groups):
balancer = self.load_balancers.get(arn)
if balancer is None:
raise LoadBalancerNotFoundError()
# Check all security groups exist
for sec_group_id in sec_groups:
if self.ec2_backend.get_security_group_from_id(sec_group_id) is None:
raise RESTError('InvalidSecurityGroup', 'Security group {0} does not exist'.format(sec_group_id))
balancer.security_groups = sec_groups
def set_subnets(self, arn, subnets):
balancer = self.load_balancers.get(arn)
if balancer is None:
raise LoadBalancerNotFoundError()
subnet_objects = []
sub_zone_list = {}
for subnet in subnets:
try:
subnet = self.ec2_backend.get_subnet(subnet)
if subnet.availability_zone in sub_zone_list:
raise RESTError('InvalidConfigurationRequest', 'More than 1 subnet cannot be specified for 1 availability zone')
sub_zone_list[subnet.availability_zone] = subnet.id
subnet_objects.append(subnet)
except Exception:
raise SubnetNotFoundError()
if len(sub_zone_list) < 2:
raise RESTError('InvalidConfigurationRequest', 'More than 1 availability zone must be specified')
balancer.subnets = subnet_objects
return sub_zone_list.items()
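The set_subnets method above enforces two rules: at most one subnet per availability zone, and at least two zones overall. A standalone sketch of that check (the subnet IDs and zone names below are made up for illustration):

```python
# Sketch of the subnet rules enforced by set_subnets above: one subnet
# per availability zone, and at least two zones in total.
def check_subnet_zones(subnets):
    """subnets is an iterable of (subnet_id, availability_zone) pairs."""
    sub_zone_list = {}
    for subnet_id, zone in subnets:
        if zone in sub_zone_list:
            raise ValueError('More than 1 subnet cannot be specified for 1 availability zone')
        sub_zone_list[zone] = subnet_id
    if len(sub_zone_list) < 2:
        raise ValueError('More than 1 availability zone must be specified')
    return sub_zone_list
```

Two subnets in distinct zones return the zone-to-subnet mapping; a duplicate zone, or a single zone, raises.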
def modify_load_balancer_attributes(self, arn, attrs):
balancer = self.load_balancers.get(arn)
if balancer is None:
raise LoadBalancerNotFoundError()
for key in attrs:
if key not in FakeLoadBalancer.VALID_ATTRS:
raise RESTError('InvalidConfigurationRequest', 'Key {0} not valid'.format(key))
balancer.attrs.update(attrs)
return balancer.attrs
def describe_load_balancer_attributes(self, arn):
balancer = self.load_balancers.get(arn)
if balancer is None:
raise LoadBalancerNotFoundError()
return balancer.attrs
def modify_target_group(self, arn, health_check_proto=None, health_check_port=None, health_check_path=None, health_check_interval=None,
health_check_timeout=None, healthy_threshold_count=None, unhealthy_threshold_count=None, http_codes=None):
target_group = self.target_groups.get(arn)
if target_group is None:
raise TargetGroupNotFoundError()
if http_codes is not None and FakeTargetGroup.HTTP_CODE_REGEX.match(http_codes) is None:
raise RESTError('InvalidParameterValue', 'HttpCode must be like 200 | 200-399 | 200,201 ...')
if http_codes is not None:
target_group.matcher['HttpCode'] = http_codes
if health_check_interval is not None:
target_group.healthcheck_interval_seconds = health_check_interval
if health_check_path is not None:
target_group.healthcheck_path = health_check_path
if health_check_port is not None:
target_group.healthcheck_port = health_check_port
if health_check_proto is not None:
target_group.healthcheck_protocol = health_check_proto
if health_check_timeout is not None:
target_group.healthcheck_timeout_seconds = health_check_timeout
if healthy_threshold_count is not None:
target_group.healthy_threshold_count = healthy_threshold_count
if unhealthy_threshold_count is not None:
target_group.unhealthy_threshold_count = unhealthy_threshold_count
return target_group
def modify_listener(self, arn, port=None, protocol=None, ssl_policy=None, certificates=None, default_actions=None):
for load_balancer in self.load_balancers.values():
if arn in load_balancer.listeners:
break
else:
raise ListenerNotFoundError()
listener = load_balancer.listeners[arn]
if port is not None:
for listener_arn, current_listener in load_balancer.listeners.items():
if listener_arn == arn:
continue
if current_listener.port == port:
raise DuplicateListenerError()
listener.port = port
if protocol is not None:
if protocol not in ('HTTP', 'HTTPS', 'TCP'):
raise RESTError('UnsupportedProtocol', 'Protocol {0} is not supported'.format(protocol))
# HTTPS checks
if protocol == 'HTTPS':
# HTTPS
# Might already be HTTPS so may not provide certs
if certificates is None and listener.protocol != 'HTTPS':
raise RESTError('InvalidConfigurationRequest', 'Certificates must be provided for HTTPS')
# Check certificates exist
if certificates is not None:
default_cert = None
all_certs = set() # for SNI
for cert in certificates:
if cert['is_default'] == 'true':
default_cert = cert['certificate_arn']
try:
self.acm_backend.get_certificate(cert['certificate_arn'])
except Exception:
raise RESTError('CertificateNotFound', 'Certificate {0} not found'.format(cert['certificate_arn']))
all_certs.add(cert['certificate_arn'])
if default_cert is None:
raise RESTError('InvalidConfigurationRequest', 'No default certificate')
listener.certificate = default_cert
listener.certificates = list(all_certs)
listener.protocol = protocol
if ssl_policy is not None:
# It's already validated in responses.py
listener.ssl_policy = ssl_policy
if default_actions is not None:
# Is currently not validated
listener.default_actions = default_actions
return listener
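modify_listener above locates the owning load balancer with Python's for/else idiom: the else branch runs only when the loop completes without hitting break. A minimal standalone illustration (the data shapes here are simplified, not moto's actual models):

```python
# for/else lookup as used in modify_listener: the else clause fires
# only when no load balancer owns the listener ARN.
def find_owner(load_balancers, listener_arn):
    for balancer in load_balancers:
        if listener_arn in balancer['listeners']:
            break
    else:
        # No break occurred: the ARN was not found anywhere.
        raise KeyError('listener not found')
    return balancer
```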
def _any_listener_using(self, target_group_arn):
for load_balancer in self.load_balancers.values():
for listener in load_balancer.listeners.values():


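The Matcher validation in create_target_group and modify_target_group above relies on FakeTargetGroup.HTTP_CODE_REGEX, which is defined outside this hunk. A plausible stand-in accepting the documented forms ("200 | 200-399 | 200,201 ...") could look like this; the exact pattern in moto may differ:

```python
import re

# Hypothetical stand-in for FakeTargetGroup.HTTP_CODE_REGEX (the real
# pattern is defined outside this diff): single status codes, ranges,
# and comma-separated combinations of both.
HTTP_CODE_REGEX = re.compile(r'^(\d+(-\d+)?)(,\d+(-\d+)?)*$')

def is_valid_http_code(matcher_value):
    # Mirrors the 'HttpCode must be like 200 | 200-399 | 200,201 ...' check.
    return HTTP_CODE_REGEX.match(matcher_value) is not None
```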
@ -1,4 +1,6 @@
from __future__ import unicode_literals
from moto.core.exceptions import RESTError
from moto.core.utils import amzn_request_id
from moto.core.responses import BaseResponse
from .models import elbv2_backends
from .exceptions import DuplicateTagKeysError
@ -6,12 +8,131 @@ from .exceptions import LoadBalancerNotFoundError
from .exceptions import TargetGroupNotFoundError
SSL_POLICIES = [
{
'name': 'ELBSecurityPolicy-2016-08',
'ssl_protocols': ['TLSv1', 'TLSv1.1', 'TLSv1.2'],
'ciphers': [
{'name': 'ECDHE-ECDSA-AES128-GCM-SHA256', 'priority': 1},
{'name': 'ECDHE-RSA-AES128-GCM-SHA256', 'priority': 2},
{'name': 'ECDHE-ECDSA-AES128-SHA256', 'priority': 3},
{'name': 'ECDHE-RSA-AES128-SHA256', 'priority': 4},
{'name': 'ECDHE-ECDSA-AES128-SHA', 'priority': 5},
{'name': 'ECDHE-RSA-AES128-SHA', 'priority': 6},
{'name': 'ECDHE-ECDSA-AES256-GCM-SHA384', 'priority': 7},
{'name': 'ECDHE-RSA-AES256-GCM-SHA384', 'priority': 8},
{'name': 'ECDHE-ECDSA-AES256-SHA384', 'priority': 9},
{'name': 'ECDHE-RSA-AES256-SHA384', 'priority': 10},
{'name': 'ECDHE-RSA-AES256-SHA', 'priority': 11},
{'name': 'ECDHE-ECDSA-AES256-SHA', 'priority': 12},
{'name': 'AES128-GCM-SHA256', 'priority': 13},
{'name': 'AES128-SHA256', 'priority': 14},
{'name': 'AES128-SHA', 'priority': 15},
{'name': 'AES256-GCM-SHA384', 'priority': 16},
{'name': 'AES256-SHA256', 'priority': 17},
{'name': 'AES256-SHA', 'priority': 18}
],
},
{
'name': 'ELBSecurityPolicy-TLS-1-2-2017-01',
'ssl_protocols': ['TLSv1.2'],
'ciphers': [
{'name': 'ECDHE-ECDSA-AES128-GCM-SHA256', 'priority': 1},
{'name': 'ECDHE-RSA-AES128-GCM-SHA256', 'priority': 2},
{'name': 'ECDHE-ECDSA-AES128-SHA256', 'priority': 3},
{'name': 'ECDHE-RSA-AES128-SHA256', 'priority': 4},
{'name': 'ECDHE-ECDSA-AES256-GCM-SHA384', 'priority': 5},
{'name': 'ECDHE-RSA-AES256-GCM-SHA384', 'priority': 6},
{'name': 'ECDHE-ECDSA-AES256-SHA384', 'priority': 7},
{'name': 'ECDHE-RSA-AES256-SHA384', 'priority': 8},
{'name': 'AES128-GCM-SHA256', 'priority': 9},
{'name': 'AES128-SHA256', 'priority': 10},
{'name': 'AES256-GCM-SHA384', 'priority': 11},
{'name': 'AES256-SHA256', 'priority': 12}
]
},
{
'name': 'ELBSecurityPolicy-TLS-1-1-2017-01',
'ssl_protocols': ['TLSv1.1', 'TLSv1.2'],
'ciphers': [
{'name': 'ECDHE-ECDSA-AES128-GCM-SHA256', 'priority': 1},
{'name': 'ECDHE-RSA-AES128-GCM-SHA256', 'priority': 2},
{'name': 'ECDHE-ECDSA-AES128-SHA256', 'priority': 3},
{'name': 'ECDHE-RSA-AES128-SHA256', 'priority': 4},
{'name': 'ECDHE-ECDSA-AES128-SHA', 'priority': 5},
{'name': 'ECDHE-RSA-AES128-SHA', 'priority': 6},
{'name': 'ECDHE-ECDSA-AES256-GCM-SHA384', 'priority': 7},
{'name': 'ECDHE-RSA-AES256-GCM-SHA384', 'priority': 8},
{'name': 'ECDHE-ECDSA-AES256-SHA384', 'priority': 9},
{'name': 'ECDHE-RSA-AES256-SHA384', 'priority': 10},
{'name': 'ECDHE-RSA-AES256-SHA', 'priority': 11},
{'name': 'ECDHE-ECDSA-AES256-SHA', 'priority': 12},
{'name': 'AES128-GCM-SHA256', 'priority': 13},
{'name': 'AES128-SHA256', 'priority': 14},
{'name': 'AES128-SHA', 'priority': 15},
{'name': 'AES256-GCM-SHA384', 'priority': 16},
{'name': 'AES256-SHA256', 'priority': 17},
{'name': 'AES256-SHA', 'priority': 18}
]
},
{
'name': 'ELBSecurityPolicy-2015-05',
'ssl_protocols': ['TLSv1', 'TLSv1.1', 'TLSv1.2'],
'ciphers': [
{'name': 'ECDHE-ECDSA-AES128-GCM-SHA256', 'priority': 1},
{'name': 'ECDHE-RSA-AES128-GCM-SHA256', 'priority': 2},
{'name': 'ECDHE-ECDSA-AES128-SHA256', 'priority': 3},
{'name': 'ECDHE-RSA-AES128-SHA256', 'priority': 4},
{'name': 'ECDHE-ECDSA-AES128-SHA', 'priority': 5},
{'name': 'ECDHE-RSA-AES128-SHA', 'priority': 6},
{'name': 'ECDHE-ECDSA-AES256-GCM-SHA384', 'priority': 7},
{'name': 'ECDHE-RSA-AES256-GCM-SHA384', 'priority': 8},
{'name': 'ECDHE-ECDSA-AES256-SHA384', 'priority': 9},
{'name': 'ECDHE-RSA-AES256-SHA384', 'priority': 10},
{'name': 'ECDHE-RSA-AES256-SHA', 'priority': 11},
{'name': 'ECDHE-ECDSA-AES256-SHA', 'priority': 12},
{'name': 'AES128-GCM-SHA256', 'priority': 13},
{'name': 'AES128-SHA256', 'priority': 14},
{'name': 'AES128-SHA', 'priority': 15},
{'name': 'AES256-GCM-SHA384', 'priority': 16},
{'name': 'AES256-SHA256', 'priority': 17},
{'name': 'AES256-SHA', 'priority': 18}
]
},
{
'name': 'ELBSecurityPolicy-TLS-1-0-2015-04',
'ssl_protocols': ['TLSv1', 'TLSv1.1', 'TLSv1.2'],
'ciphers': [
{'name': 'ECDHE-ECDSA-AES128-GCM-SHA256', 'priority': 1},
{'name': 'ECDHE-RSA-AES128-GCM-SHA256', 'priority': 2},
{'name': 'ECDHE-ECDSA-AES128-SHA256', 'priority': 3},
{'name': 'ECDHE-RSA-AES128-SHA256', 'priority': 4},
{'name': 'ECDHE-ECDSA-AES128-SHA', 'priority': 5},
{'name': 'ECDHE-RSA-AES128-SHA', 'priority': 6},
{'name': 'ECDHE-ECDSA-AES256-GCM-SHA384', 'priority': 7},
{'name': 'ECDHE-RSA-AES256-GCM-SHA384', 'priority': 8},
{'name': 'ECDHE-ECDSA-AES256-SHA384', 'priority': 9},
{'name': 'ECDHE-RSA-AES256-SHA384', 'priority': 10},
{'name': 'ECDHE-RSA-AES256-SHA', 'priority': 11},
{'name': 'ECDHE-ECDSA-AES256-SHA', 'priority': 12},
{'name': 'AES128-GCM-SHA256', 'priority': 13},
{'name': 'AES128-SHA256', 'priority': 14},
{'name': 'AES128-SHA', 'priority': 15},
{'name': 'AES256-GCM-SHA384', 'priority': 16},
{'name': 'AES256-SHA256', 'priority': 17},
{'name': 'AES256-SHA', 'priority': 18},
{'name': 'DES-CBC3-SHA', 'priority': 19}
]
}
]
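The describe_ssl_policies handler further down filters this table by name with filter(), which returns a lazy iterator on Python 3. An equivalent sketch as a plain function (the policy names below are a subset of the table above):

```python
# Name filter equivalent to the one in describe_ssl_policies: return
# every policy when no names are requested, otherwise only the matches.
def filter_policies(policies, names):
    if not names:
        return list(policies)
    return [policy for policy in policies if policy['name'] in names]

POLICIES = [
    {'name': 'ELBSecurityPolicy-2016-08'},
    {'name': 'ELBSecurityPolicy-TLS-1-2-2017-01'},
]
```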
class ELBV2Response(BaseResponse):
@property
def elbv2_backend(self):
return elbv2_backends[self.region]
@amzn_request_id
def create_load_balancer(self):
load_balancer_name = self._get_param('Name')
subnet_ids = self._get_multi_param("Subnets.member")
@ -28,6 +149,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(CREATE_LOAD_BALANCER_TEMPLATE)
return template.render(load_balancer=load_balancer)
@amzn_request_id
def create_rule(self):
lister_arn = self._get_param('ListenerArn')
_conditions = self._get_list_prefix('Conditions.member')
@ -52,18 +174,20 @@ class ELBV2Response(BaseResponse):
template = self.response_template(CREATE_RULE_TEMPLATE)
return template.render(rules=rules)
@amzn_request_id
def create_target_group(self):
name = self._get_param('Name')
vpc_id = self._get_param('VpcId')
protocol = self._get_param('Protocol')
port = self._get_param('Port')
healthcheck_protocol = self._get_param('HealthCheckProtocol', 'HTTP')
healthcheck_port = self._get_param('HealthCheckPort', 'traffic-port')
healthcheck_path = self._get_param('HealthCheckPath', '/')
healthcheck_interval_seconds = self._get_param('HealthCheckIntervalSeconds', '30')
healthcheck_timeout_seconds = self._get_param('HealthCheckTimeoutSeconds', '5')
healthy_threshold_count = self._get_param('HealthyThresholdCount', '5')
unhealthy_threshold_count = self._get_param('UnhealthyThresholdCount', '2')
healthcheck_protocol = self._get_param('HealthCheckProtocol')
healthcheck_port = self._get_param('HealthCheckPort')
healthcheck_path = self._get_param('HealthCheckPath')
healthcheck_interval_seconds = self._get_param('HealthCheckIntervalSeconds')
healthcheck_timeout_seconds = self._get_param('HealthCheckTimeoutSeconds')
healthy_threshold_count = self._get_param('HealthyThresholdCount')
unhealthy_threshold_count = self._get_param('UnhealthyThresholdCount')
matcher = self._get_param('Matcher')
target_group = self.elbv2_backend.create_target_group(
name,
@ -77,11 +201,13 @@ class ELBV2Response(BaseResponse):
healthcheck_timeout_seconds=healthcheck_timeout_seconds,
healthy_threshold_count=healthy_threshold_count,
unhealthy_threshold_count=unhealthy_threshold_count,
matcher=matcher,
)
template = self.response_template(CREATE_TARGET_GROUP_TEMPLATE)
return template.render(target_group=target_group)
@amzn_request_id
def create_listener(self):
load_balancer_arn = self._get_param('LoadBalancerArn')
protocol = self._get_param('Protocol')
@ -105,6 +231,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(CREATE_LISTENER_TEMPLATE)
return template.render(listener=listener)
@amzn_request_id
def describe_load_balancers(self):
arns = self._get_multi_param("LoadBalancerArns.member")
names = self._get_multi_param("Names.member")
@ -124,6 +251,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DESCRIBE_LOAD_BALANCERS_TEMPLATE)
return template.render(load_balancers=load_balancers_resp, marker=next_marker)
@amzn_request_id
def describe_rules(self):
listener_arn = self._get_param('ListenerArn')
rule_arns = self._get_multi_param('RuleArns.member') if any(k for k in list(self.querystring.keys()) if k.startswith('RuleArns.member')) else None
@ -144,6 +272,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DESCRIBE_RULES_TEMPLATE)
return template.render(rules=rules_resp, marker=next_marker)
@amzn_request_id
def describe_target_groups(self):
load_balancer_arn = self._get_param('LoadBalancerArn')
target_group_arns = self._get_multi_param('TargetGroupArns.member')
@ -153,6 +282,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DESCRIBE_TARGET_GROUPS_TEMPLATE)
return template.render(target_groups=target_groups)
@amzn_request_id
def describe_target_group_attributes(self):
target_group_arn = self._get_param('TargetGroupArn')
target_group = self.elbv2_backend.target_groups.get(target_group_arn)
@ -161,6 +291,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DESCRIBE_TARGET_GROUP_ATTRIBUTES_TEMPLATE)
return template.render(attributes=target_group.attributes)
@amzn_request_id
def describe_listeners(self):
load_balancer_arn = self._get_param('LoadBalancerArn')
listener_arns = self._get_multi_param('ListenerArns.member')
@ -171,30 +302,35 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DESCRIBE_LISTENERS_TEMPLATE)
return template.render(listeners=listeners)
@amzn_request_id
def delete_load_balancer(self):
arn = self._get_param('LoadBalancerArn')
self.elbv2_backend.delete_load_balancer(arn)
template = self.response_template(DELETE_LOAD_BALANCER_TEMPLATE)
return template.render()
@amzn_request_id
def delete_rule(self):
arn = self._get_param('RuleArn')
self.elbv2_backend.delete_rule(arn)
template = self.response_template(DELETE_RULE_TEMPLATE)
return template.render()
@amzn_request_id
def delete_target_group(self):
arn = self._get_param('TargetGroupArn')
self.elbv2_backend.delete_target_group(arn)
template = self.response_template(DELETE_TARGET_GROUP_TEMPLATE)
return template.render()
@amzn_request_id
def delete_listener(self):
arn = self._get_param('ListenerArn')
self.elbv2_backend.delete_listener(arn)
template = self.response_template(DELETE_LISTENER_TEMPLATE)
return template.render()
@amzn_request_id
def modify_rule(self):
rule_arn = self._get_param('RuleArn')
_conditions = self._get_list_prefix('Conditions.member')
@ -217,6 +353,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(MODIFY_RULE_TEMPLATE)
return template.render(rules=rules)
@amzn_request_id
def modify_target_group_attributes(self):
target_group_arn = self._get_param('TargetGroupArn')
target_group = self.elbv2_backend.target_groups.get(target_group_arn)
@ -230,6 +367,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(MODIFY_TARGET_GROUP_ATTRIBUTES_TEMPLATE)
return template.render(attributes=attributes)
@amzn_request_id
def register_targets(self):
target_group_arn = self._get_param('TargetGroupArn')
targets = self._get_list_prefix('Targets.member')
@ -238,6 +376,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(REGISTER_TARGETS_TEMPLATE)
return template.render()
@amzn_request_id
def deregister_targets(self):
target_group_arn = self._get_param('TargetGroupArn')
targets = self._get_list_prefix('Targets.member')
@ -246,6 +385,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DEREGISTER_TARGETS_TEMPLATE)
return template.render()
@amzn_request_id
def describe_target_health(self):
target_group_arn = self._get_param('TargetGroupArn')
targets = self._get_list_prefix('Targets.member')
@ -254,6 +394,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DESCRIBE_TARGET_HEALTH_TEMPLATE)
return template.render(target_health_descriptions=target_health_descriptions)
@amzn_request_id
def set_rule_priorities(self):
rule_priorities = self._get_list_prefix('RulePriorities.member')
for rule_priority in rule_priorities:
@ -262,6 +403,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(SET_RULE_PRIORITIES_TEMPLATE)
return template.render(rules=rules)
@amzn_request_id
def add_tags(self):
resource_arns = self._get_multi_param('ResourceArns.member')
@ -281,6 +423,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(ADD_TAGS_TEMPLATE)
return template.render()
@amzn_request_id
def remove_tags(self):
resource_arns = self._get_multi_param('ResourceArns.member')
tag_keys = self._get_multi_param('TagKeys.member')
@ -301,6 +444,7 @@ class ELBV2Response(BaseResponse):
template = self.response_template(REMOVE_TAGS_TEMPLATE)
return template.render()
@amzn_request_id
def describe_tags(self):
resource_arns = self._get_multi_param('ResourceArns.member')
resources = []
@ -320,6 +464,125 @@ class ELBV2Response(BaseResponse):
template = self.response_template(DESCRIBE_TAGS_TEMPLATE)
return template.render(resources=resources)
@amzn_request_id
def describe_account_limits(self):
# Supports paging but not worth implementing yet
# marker = self._get_param('Marker')
# page_size = self._get_param('PageSize')
limits = {
'application-load-balancers': 20,
'target-groups': 3000,
'targets-per-application-load-balancer': 30,
'listeners-per-application-load-balancer': 50,
'rules-per-application-load-balancer': 100,
'network-load-balancers': 20,
'targets-per-network-load-balancer': 200,
'listeners-per-network-load-balancer': 50
}
template = self.response_template(DESCRIBE_LIMITS_TEMPLATE)
return template.render(limits=limits)
@amzn_request_id
def describe_ssl_policies(self):
names = self._get_multi_param('Names.member.')
# Supports paging but not worth implementing yet
# marker = self._get_param('Marker')
# page_size = self._get_param('PageSize')
policies = SSL_POLICIES
if names:
policies = filter(lambda policy: policy['name'] in names, policies)
template = self.response_template(DESCRIBE_SSL_POLICIES_TEMPLATE)
return template.render(policies=policies)
@amzn_request_id
def set_ip_address_type(self):
arn = self._get_param('LoadBalancerArn')
ip_type = self._get_param('IpAddressType')
self.elbv2_backend.set_ip_address_type(arn, ip_type)
template = self.response_template(SET_IP_ADDRESS_TYPE_TEMPLATE)
return template.render(ip_type=ip_type)
@amzn_request_id
def set_security_groups(self):
arn = self._get_param('LoadBalancerArn')
sec_groups = self._get_multi_param('SecurityGroups.member.')
self.elbv2_backend.set_security_groups(arn, sec_groups)
template = self.response_template(SET_SECURITY_GROUPS_TEMPLATE)
return template.render(sec_groups=sec_groups)
@amzn_request_id
def set_subnets(self):
arn = self._get_param('LoadBalancerArn')
subnets = self._get_multi_param('Subnets.member.')
subnet_zone_list = self.elbv2_backend.set_subnets(arn, subnets)
template = self.response_template(SET_SUBNETS_TEMPLATE)
return template.render(subnets=subnet_zone_list)
@amzn_request_id
def modify_load_balancer_attributes(self):
arn = self._get_param('LoadBalancerArn')
attrs = self._get_map_prefix('Attributes.member', key_end='Key', value_end='Value')
all_attrs = self.elbv2_backend.modify_load_balancer_attributes(arn, attrs)
template = self.response_template(MODIFY_LOADBALANCER_ATTRS_TEMPLATE)
return template.render(attrs=all_attrs)
@amzn_request_id
def describe_load_balancer_attributes(self):
arn = self._get_param('LoadBalancerArn')
attrs = self.elbv2_backend.describe_load_balancer_attributes(arn)
template = self.response_template(DESCRIBE_LOADBALANCER_ATTRS_TEMPLATE)
return template.render(attrs=attrs)
@amzn_request_id
def modify_target_group(self):
arn = self._get_param('TargetGroupArn')
health_check_proto = self._get_param('HealthCheckProtocol') # 'HTTP' | 'HTTPS' | 'TCP',
health_check_port = self._get_param('HealthCheckPort')
health_check_path = self._get_param('HealthCheckPath')
health_check_interval = self._get_param('HealthCheckIntervalSeconds')
health_check_timeout = self._get_param('HealthCheckTimeoutSeconds')
healthy_threshold_count = self._get_param('HealthyThresholdCount')
unhealthy_threshold_count = self._get_param('UnhealthyThresholdCount')
http_codes = self._get_param('Matcher.HttpCode')
target_group = self.elbv2_backend.modify_target_group(arn, health_check_proto, health_check_port, health_check_path, health_check_interval,
health_check_timeout, healthy_threshold_count, unhealthy_threshold_count, http_codes)
template = self.response_template(MODIFY_TARGET_GROUP_TEMPLATE)
return template.render(target_group=target_group)
@amzn_request_id
def modify_listener(self):
arn = self._get_param('ListenerArn')
port = self._get_param('Port')
protocol = self._get_param('Protocol')
ssl_policy = self._get_param('SslPolicy')
certificates = self._get_list_prefix('Certificates.member')
default_actions = self._get_list_prefix('DefaultActions.member')
# Should really move SSL Policies to models
if ssl_policy is not None and ssl_policy not in [item['name'] for item in SSL_POLICIES]:
raise RESTError('SSLPolicyNotFound', 'Policy {0} not found'.format(ssl_policy))
listener = self.elbv2_backend.modify_listener(arn, port, protocol, ssl_policy, certificates, default_actions)
template = self.response_template(MODIFY_LISTENER_TEMPLATE)
return template.render(listener=listener)
def _add_tags(self, resource):
tag_values = []
tag_keys = []
@ -348,14 +611,14 @@ class ELBV2Response(BaseResponse):
ADD_TAGS_TEMPLATE = """<AddTagsResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<AddTagsResult/>
<ResponseMetadata>
<RequestId>360e81f7-1100-11e4-b6ed-0f30EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</AddTagsResponse>"""
REMOVE_TAGS_TEMPLATE = """<RemoveTagsResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<RemoveTagsResult/>
<ResponseMetadata>
<RequestId>360e81f7-1100-11e4-b6ed-0f30EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</RemoveTagsResponse>"""
@ -378,11 +641,10 @@ DESCRIBE_TAGS_TEMPLATE = """<DescribeTagsResponse xmlns="http://elasticloadbalan
</TagDescriptions>
</DescribeTagsResult>
<ResponseMetadata>
<RequestId>360e81f7-1100-11e4-b6ed-0f30EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeTagsResponse>"""
CREATE_LOAD_BALANCER_TEMPLATE = """<CreateLoadBalancerResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<CreateLoadBalancerResult>
<LoadBalancers>
@ -415,7 +677,7 @@ CREATE_LOAD_BALANCER_TEMPLATE = """<CreateLoadBalancerResponse xmlns="http://ela
</LoadBalancers>
</CreateLoadBalancerResult>
<ResponseMetadata>
<RequestId>32d531b2-f2d0-11e5-9192-3fff33344cfa</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</CreateLoadBalancerResponse>"""
@ -452,7 +714,7 @@ CREATE_RULE_TEMPLATE = """<CreateRuleResponse xmlns="http://elasticloadbalancing
</Rules>
</CreateRuleResult>
<ResponseMetadata>
<RequestId>c5478c83-f397-11e5-bb98-57195a6eb84a</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</CreateRuleResponse>"""
@ -472,14 +734,19 @@ CREATE_TARGET_GROUP_TEMPLATE = """<CreateTargetGroupResponse xmlns="http://elast
<HealthCheckTimeoutSeconds>{{ target_group.healthcheck_timeout_seconds }}</HealthCheckTimeoutSeconds>
<HealthyThresholdCount>{{ target_group.healthy_threshold_count }}</HealthyThresholdCount>
<UnhealthyThresholdCount>{{ target_group.unhealthy_threshold_count }}</UnhealthyThresholdCount>
{% if target_group.matcher %}
<Matcher>
<HttpCode>200</HttpCode>
<HttpCode>{{ target_group.matcher['HttpCode'] }}</HttpCode>
</Matcher>
{% endif %}
{% if target_group.target_type %}
<TargetType>{{ target_group.target_type }}</TargetType>
{% endif %}
</member>
</TargetGroups>
</CreateTargetGroupResult>
<ResponseMetadata>
<RequestId>b83fe90e-f2d5-11e5-b95d-3b2c1831fc26</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</CreateTargetGroupResponse>"""
@ -489,11 +756,13 @@ CREATE_LISTENER_TEMPLATE = """<CreateListenerResponse xmlns="http://elasticloadb
<member>
<LoadBalancerArn>{{ listener.load_balancer_arn }}</LoadBalancerArn>
<Protocol>{{ listener.protocol }}</Protocol>
{% if listener.certificate %}
{% if listener.certificates %}
<Certificates>
{% for cert in listener.certificates %}
<member>
<CertificateArn>{{ listener.certificate }}</CertificateArn>
<CertificateArn>{{ cert }}</CertificateArn>
</member>
{% endfor %}
</Certificates>
{% endif %}
<Port>{{ listener.port }}</Port>
@ -511,35 +780,35 @@ CREATE_LISTENER_TEMPLATE = """<CreateListenerResponse xmlns="http://elasticloadb
</Listeners>
</CreateListenerResult>
<ResponseMetadata>
<RequestId>97f1bb38-f390-11e5-b95d-3b2c1831fc26</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</CreateListenerResponse>"""
DELETE_LOAD_BALANCER_TEMPLATE = """<DeleteLoadBalancerResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DeleteLoadBalancerResult/>
<ResponseMetadata>
<RequestId>1549581b-12b7-11e3-895e-1334aEXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DeleteLoadBalancerResponse>"""
DELETE_RULE_TEMPLATE = """<DeleteRuleResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DeleteRuleResult/>
<ResponseMetadata>
<RequestId>1549581b-12b7-11e3-895e-1334aEXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DeleteRuleResponse>"""
DELETE_TARGET_GROUP_TEMPLATE = """<DeleteTargetGroupResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DeleteTargetGroupResult/>
<ResponseMetadata>
<RequestId>1549581b-12b7-11e3-895e-1334aEXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DeleteTargetGroupResponse>"""
DELETE_LISTENER_TEMPLATE = """<DeleteListenerResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DeleteListenerResult/>
<ResponseMetadata>
<RequestId>1549581b-12b7-11e3-895e-1334aEXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DeleteListenerResponse>"""
@ -572,6 +841,7 @@ DESCRIBE_LOAD_BALANCERS_TEMPLATE = """<DescribeLoadBalancersResponse xmlns="http
<Code>provisioning</Code>
</State>
<Type>application</Type>
<IpAddressType>ipv4</IpAddressType>
</member>
{% endfor %}
</LoadBalancers>
@ -580,7 +850,7 @@ DESCRIBE_LOAD_BALANCERS_TEMPLATE = """<DescribeLoadBalancersResponse xmlns="http
{% endif %}
</DescribeLoadBalancersResult>
<ResponseMetadata>
<RequestId>f9880f01-7852-629d-a6c3-3ae2-666a409287e6dc0c</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeLoadBalancersResponse>"""
@ -620,7 +890,7 @@ DESCRIBE_RULES_TEMPLATE = """<DescribeRulesResponse xmlns="http://elasticloadbal
{% endif %}
</DescribeRulesResult>
<ResponseMetadata>
<RequestId>74926cf3-f3a3-11e5-b543-9f2c3fbb9bee</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeRulesResponse>"""
@ -634,16 +904,21 @@ DESCRIBE_TARGET_GROUPS_TEMPLATE = """<DescribeTargetGroupsResponse xmlns="http:/
<Protocol>{{ target_group.protocol }}</Protocol>
<Port>{{ target_group.port }}</Port>
<VpcId>{{ target_group.vpc_id }}</VpcId>
<HealthCheckProtocol>{{ target_group.health_check_protocol }}</HealthCheckProtocol>
<HealthCheckProtocol>{{ target_group.healthcheck_protocol }}</HealthCheckProtocol>
<HealthCheckPort>{{ target_group.healthcheck_port }}</HealthCheckPort>
<HealthCheckPath>{{ target_group.healthcheck_path }}</HealthCheckPath>
<HealthCheckIntervalSeconds>{{ target_group.healthcheck_interval_seconds }}</HealthCheckIntervalSeconds>
<HealthCheckTimeoutSeconds>{{ target_group.healthcheck_timeout_seconds }}</HealthCheckTimeoutSeconds>
<HealthyThresholdCount>{{ target_group.healthy_threshold_count }}</HealthyThresholdCount>
<UnhealthyThresholdCount>{{ target_group.unhealthy_threshold_count }}</UnhealthyThresholdCount>
{% if target_group.matcher %}
<Matcher>
<HttpCode>200</HttpCode>
<HttpCode>{{ target_group.matcher['HttpCode'] }}</HttpCode>
</Matcher>
{% endif %}
{% if target_group.target_type %}
<TargetType>{{ target_group.target_type }}</TargetType>
{% endif %}
<LoadBalancerArns>
{% for load_balancer_arn in target_group.load_balancer_arns %}
<member>{{ load_balancer_arn }}</member>
@ -654,11 +929,10 @@ DESCRIBE_TARGET_GROUPS_TEMPLATE = """<DescribeTargetGroupsResponse xmlns="http:/
</TargetGroups>
</DescribeTargetGroupsResult>
<ResponseMetadata>
<RequestId>70092c0e-f3a9-11e5-ae48-cff02092876b</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeTargetGroupsResponse>"""
DESCRIBE_TARGET_GROUP_ATTRIBUTES_TEMPLATE = """<DescribeTargetGroupAttributesResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DescribeTargetGroupAttributesResult>
<Attributes>
@ -671,11 +945,10 @@ DESCRIBE_TARGET_GROUP_ATTRIBUTES_TEMPLATE = """<DescribeTargetGroupAttributesRes
</Attributes>
</DescribeTargetGroupAttributesResult>
<ResponseMetadata>
<RequestId>70092c0e-f3a9-11e5-ae48-cff02092876b</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeTargetGroupAttributesResponse>"""
DESCRIBE_LISTENERS_TEMPLATE = """<DescribeLoadBalancersResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DescribeListenersResult>
<Listeners>
@ -706,7 +979,7 @@ DESCRIBE_LISTENERS_TEMPLATE = """<DescribeLoadBalancersResponse xmlns="http://el
</Listeners>
</DescribeListenersResult>
<ResponseMetadata>
<RequestId>65a3a7ea-f39c-11e5-b543-9f2c3fbb9bee</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeLoadBalancersResponse>"""
@ -721,7 +994,7 @@ CONFIGURE_HEALTH_CHECK_TEMPLATE = """<ConfigureHealthCheckResponse xmlns="http:/
</HealthCheck>
</ConfigureHealthCheckResult>
<ResponseMetadata>
<RequestId>f9880f01-7852-629d-a6c3-3ae2-666a409287e6dc0c</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ConfigureHealthCheckResponse>"""
@ -758,7 +1031,7 @@ MODIFY_RULE_TEMPLATE = """<ModifyRuleResponse xmlns="http://elasticloadbalancing
</Rules>
</ModifyRuleResult>
<ResponseMetadata>
<RequestId>c5478c83-f397-11e5-bb98-57195a6eb84a</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ModifyRuleResponse>"""
@ -774,7 +1047,7 @@ MODIFY_TARGET_GROUP_ATTRIBUTES_TEMPLATE = """<ModifyTargetGroupAttributesRespons
</Attributes>
</ModifyTargetGroupAttributesResult>
<ResponseMetadata>
<RequestId>70092c0e-f3a9-11e5-ae48-cff02092876b</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ModifyTargetGroupAttributesResponse>"""
@ -782,7 +1055,7 @@ REGISTER_TARGETS_TEMPLATE = """<RegisterTargetsResponse xmlns="http://elasticloa
<RegisterTargetsResult>
</RegisterTargetsResult>
<ResponseMetadata>
<RequestId>f9880f01-7852-629d-a6c3-3ae2-666a409287e6dc0c</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</RegisterTargetsResponse>"""
@ -790,22 +1063,21 @@ DEREGISTER_TARGETS_TEMPLATE = """<DeregisterTargetsResponse xmlns="http://elasti
<DeregisterTargetsResult>
</DeregisterTargetsResult>
<ResponseMetadata>
<RequestId>f9880f01-7852-629d-a6c3-3ae2-666a409287e6dc0c</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DeregisterTargetsResponse>"""
SET_LOAD_BALANCER_SSL_CERTIFICATE = """<SetLoadBalancerListenerSSLCertificateResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<SetLoadBalancerListenerSSLCertificateResult/>
<ResponseMetadata>
<RequestId>83c88b9d-12b7-11e3-8b82-87b12EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetLoadBalancerListenerSSLCertificateResponse>"""
DELETE_LOAD_BALANCER_LISTENERS = """<DeleteLoadBalancerListenersResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DeleteLoadBalancerListenersResult/>
<ResponseMetadata>
<RequestId>83c88b9d-12b7-11e3-8b82-87b12EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DeleteLoadBalancerListenersResponse>"""
@ -837,7 +1109,7 @@ DESCRIBE_ATTRIBUTES_TEMPLATE = """<DescribeLoadBalancerAttributesResponse xmlns
</LoadBalancerAttributes>
</DescribeLoadBalancerAttributesResult>
<ResponseMetadata>
<RequestId>83c88b9d-12b7-11e3-8b82-87b12EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeLoadBalancerAttributesResponse>
"""
@ -871,7 +1143,7 @@ MODIFY_ATTRIBUTES_TEMPLATE = """<ModifyLoadBalancerAttributesResponse xmlns="htt
</LoadBalancerAttributes>
</ModifyLoadBalancerAttributesResult>
<ResponseMetadata>
<RequestId>83c88b9d-12b7-11e3-8b82-87b12EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ModifyLoadBalancerAttributesResponse>
"""
@ -879,7 +1151,7 @@ MODIFY_ATTRIBUTES_TEMPLATE = """<ModifyLoadBalancerAttributesResponse xmlns="htt
CREATE_LOAD_BALANCER_POLICY_TEMPLATE = """<CreateLoadBalancerPolicyResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<CreateLoadBalancerPolicyResult/>
<ResponseMetadata>
<RequestId>83c88b9d-12b7-11e3-8b82-87b12EXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</CreateLoadBalancerPolicyResponse>
"""
@ -887,7 +1159,7 @@ CREATE_LOAD_BALANCER_POLICY_TEMPLATE = """<CreateLoadBalancerPolicyResponse xmln
SET_LOAD_BALANCER_POLICIES_OF_LISTENER_TEMPLATE = """<SetLoadBalancerPoliciesOfListenerResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<SetLoadBalancerPoliciesOfListenerResult/>
<ResponseMetadata>
<RequestId>07b1ecbc-1100-11e3-acaf-dd7edEXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetLoadBalancerPoliciesOfListenerResponse>
"""
@ -895,7 +1167,7 @@ SET_LOAD_BALANCER_POLICIES_OF_LISTENER_TEMPLATE = """<SetLoadBalancerPoliciesOfL
SET_LOAD_BALANCER_POLICIES_FOR_BACKEND_SERVER_TEMPLATE = """<SetLoadBalancerPoliciesForBackendServerResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<SetLoadBalancerPoliciesForBackendServerResult/>
<ResponseMetadata>
<RequestId>0eb9b381-dde0-11e2-8d78-6ddbaEXAMPLE</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetLoadBalancerPoliciesForBackendServerResponse>
"""
@ -918,7 +1190,7 @@ DESCRIBE_TARGET_HEALTH_TEMPLATE = """<DescribeTargetHealthResponse xmlns="http:/
</TargetHealthDescriptions>
</DescribeTargetHealthResult>
<ResponseMetadata>
<RequestId>c534f810-f389-11e5-9192-3fff33344cfa</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeTargetHealthResponse>"""
@ -955,6 +1227,186 @@ SET_RULE_PRIORITIES_TEMPLATE = """<SetRulePrioritiesResponse xmlns="http://elast
</Rules>
</SetRulePrioritiesResult>
<ResponseMetadata>
<RequestId>4d7a8036-f3a7-11e5-9c02-8fd20490d5a6</RequestId>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetRulePrioritiesResponse>"""
DESCRIBE_LIMITS_TEMPLATE = """<DescribeAccountLimitsResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DescribeAccountLimitsResult>
<Limits>
{% for key, value in limits.items() %}
<member>
<Name>{{ key }}</Name>
<Max>{{ value }}</Max>
</member>
{% endfor %}
</Limits>
</DescribeAccountLimitsResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeAccountLimitsResponse>"""
DESCRIBE_SSL_POLICIES_TEMPLATE = """<DescribeSSLPoliciesResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DescribeSSLPoliciesResult>
<SslPolicies>
{% for policy in policies %}
<member>
<Name>{{ policy['name'] }}</Name>
<Ciphers>
{% for cipher in policy['ciphers'] %}
<member>
<Name>{{ cipher['name'] }}</Name>
<Priority>{{ cipher['priority'] }}</Priority>
</member>
{% endfor %}
</Ciphers>
<SslProtocols>
{% for proto in policy['ssl_protocols'] %}
<member>{{ proto }}</member>
{% endfor %}
</SslProtocols>
</member>
{% endfor %}
</SslPolicies>
</DescribeSSLPoliciesResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeSSLPoliciesResponse>"""
SET_IP_ADDRESS_TYPE_TEMPLATE = """<SetIpAddressTypeResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<SetIpAddressTypeResult>
<IpAddressType>{{ ip_type }}</IpAddressType>
</SetIpAddressTypeResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetIpAddressTypeResponse>"""
SET_SECURITY_GROUPS_TEMPLATE = """<SetSecurityGroupsResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<SetSecurityGroupsResult>
<SecurityGroupIds>
{% for group in sec_groups %}
<member>{{ group }}</member>
{% endfor %}
</SecurityGroupIds>
</SetSecurityGroupsResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetSecurityGroupsResponse>"""
SET_SUBNETS_TEMPLATE = """<SetSubnetsResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<SetSubnetsResult>
<AvailabilityZones>
{% for zone_id, subnet_id in subnets %}
<member>
<SubnetId>{{ subnet_id }}</SubnetId>
<ZoneName>{{ zone_id }}</ZoneName>
</member>
{% endfor %}
</AvailabilityZones>
</SetSubnetsResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</SetSubnetsResponse>"""
MODIFY_LOADBALANCER_ATTRS_TEMPLATE = """<ModifyLoadBalancerAttributesResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<ModifyLoadBalancerAttributesResult>
<Attributes>
{% for key, value in attrs.items() %}
<member>
{% if value == None %}<Value />{% else %}<Value>{{ value }}</Value>{% endif %}
<Key>{{ key }}</Key>
</member>
{% endfor %}
</Attributes>
</ModifyLoadBalancerAttributesResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ModifyLoadBalancerAttributesResponse>"""
DESCRIBE_LOADBALANCER_ATTRS_TEMPLATE = """<DescribeLoadBalancerAttributesResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<DescribeLoadBalancerAttributesResult>
<Attributes>
{% for key, value in attrs.items() %}
<member>
{% if value == None %}<Value />{% else %}<Value>{{ value }}</Value>{% endif %}
<Key>{{ key }}</Key>
</member>
{% endfor %}
</Attributes>
</DescribeLoadBalancerAttributesResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</DescribeLoadBalancerAttributesResponse>"""
MODIFY_TARGET_GROUP_TEMPLATE = """<ModifyTargetGroupResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<ModifyTargetGroupResult>
<TargetGroups>
<member>
<TargetGroupArn>{{ target_group.arn }}</TargetGroupArn>
<TargetGroupName>{{ target_group.name }}</TargetGroupName>
<Protocol>{{ target_group.protocol }}</Protocol>
<Port>{{ target_group.port }}</Port>
<VpcId>{{ target_group.vpc_id }}</VpcId>
<HealthCheckProtocol>{{ target_group.healthcheck_protocol }}</HealthCheckProtocol>
<HealthCheckPort>{{ target_group.healthcheck_port }}</HealthCheckPort>
<HealthCheckPath>{{ target_group.healthcheck_path }}</HealthCheckPath>
<HealthCheckIntervalSeconds>{{ target_group.healthcheck_interval_seconds }}</HealthCheckIntervalSeconds>
<HealthCheckTimeoutSeconds>{{ target_group.healthcheck_timeout_seconds }}</HealthCheckTimeoutSeconds>
<HealthyThresholdCount>{{ target_group.healthy_threshold_count }}</HealthyThresholdCount>
<UnhealthyThresholdCount>{{ target_group.unhealthy_threshold_count }}</UnhealthyThresholdCount>
<Matcher>
<HttpCode>{{ target_group.matcher['HttpCode'] }}</HttpCode>
</Matcher>
<LoadBalancerArns>
{% for load_balancer_arn in target_group.load_balancer_arns %}
<member>{{ load_balancer_arn }}</member>
{% endfor %}
</LoadBalancerArns>
</member>
</TargetGroups>
</ModifyTargetGroupResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ModifyTargetGroupResponse>"""
MODIFY_LISTENER_TEMPLATE = """<ModifyListenerResponse xmlns="http://elasticloadbalancing.amazonaws.com/doc/2015-12-01/">
<ModifyListenerResult>
<Listeners>
<member>
<LoadBalancerArn>{{ listener.load_balancer_arn }}</LoadBalancerArn>
<Protocol>{{ listener.protocol }}</Protocol>
{% if listener.certificates %}
<Certificates>
{% for cert in listener.certificates %}
<member>
<CertificateArn>{{ cert }}</CertificateArn>
</member>
{% endfor %}
</Certificates>
{% endif %}
<Port>{{ listener.port }}</Port>
<SslPolicy>{{ listener.ssl_policy }}</SslPolicy>
<ListenerArn>{{ listener.arn }}</ListenerArn>
<DefaultActions>
{% for action in listener.default_actions %}
<member>
<Type>{{ action.type }}</Type>
<TargetGroupArn>{{ action.target_group_arn }}</TargetGroupArn>
</member>
{% endfor %}
</DefaultActions>
</member>
</Listeners>
</ModifyListenerResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ModifyListenerResponse>"""


@ -1,10 +1,10 @@
from __future__ import unicode_literals
from .responses import ELBV2Response
from ..elb.urls import api_version_elb_backend
url_bases = [
"https?://elasticloadbalancing.(.+).amazonaws.com",
]
url_paths = {
'{0}/$': ELBV2Response.dispatch,
'{0}/$': api_version_elb_backend,
}

8
moto/elbv2/utils.py Normal file

@ -0,0 +1,8 @@
def make_arn_for_load_balancer(account_id, name, region_name):
return "arn:aws:elasticloadbalancing:{}:{}:loadbalancer/{}/50dc6c495c0c9188".format(
region_name, account_id, name)
def make_arn_for_target_group(account_id, name, region_name):
return "arn:aws:elasticloadbalancing:{}:{}:targetgroup/{}/50dc6c495c0c9188".format(
region_name, account_id, name)
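The two helpers above build deterministic ARNs: the region and account id fill the standard ARN prefix, and the trailing `50dc6c495c0c9188` is a fixed placeholder rather than a generated resource id. A quick sketch of the load-balancer variant, reimplemented for illustration:

```python
def make_arn_for_load_balancer(account_id, name, region_name):
    # mirrors moto/elbv2/utils.py; the trailing id is a fixed placeholder
    return "arn:aws:elasticloadbalancing:{}:{}:loadbalancer/{}/50dc6c495c0c9188".format(
        region_name, account_id, name)

print(make_arn_for_load_balancer("123456789012", "my-lb", "us-east-1"))
# arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/my-lb/50dc6c495c0c9188
```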


@ -1,6 +1,7 @@
import os
import re
from moto.core.exceptions import JsonRESTError
from moto.core import BaseBackend, BaseModel
@ -50,6 +51,8 @@ class Rule(BaseModel):
class EventsBackend(BaseBackend):
ACCOUNT_ID = re.compile(r'^(\d{1,12}|\*)$')
STATEMENT_ID = re.compile(r'^[a-zA-Z0-9-_]{1,64}$')
def __init__(self):
self.rules = {}
@ -58,6 +61,8 @@ class EventsBackend(BaseBackend):
self.rules_order = []
self.next_tokens = {}
self.permissions = {}
def _get_rule_by_index(self, i):
return self.rules.get(self.rules_order[i])
@ -181,6 +186,17 @@ class EventsBackend(BaseBackend):
return False
def put_events(self, events):
num_events = len(events)
if num_events < 1:
raise JsonRESTError('ValidationError', 'Need at least 1 event')
elif num_events > 10:
raise JsonRESTError('ValidationError', 'Can only submit 10 events at once')
        # We don't really need to store the events yet
return []
def remove_targets(self, name, ids):
rule = self.rules.get(name)
@ -193,5 +209,40 @@ class EventsBackend(BaseBackend):
def test_event_pattern(self):
raise NotImplementedError()
def put_permission(self, action, principal, statement_id):
if action is None or action != 'PutEvents':
raise JsonRESTError('InvalidParameterValue', 'Action must be PutEvents')
if principal is None or self.ACCOUNT_ID.match(principal) is None:
raise JsonRESTError('InvalidParameterValue', 'Principal must match ^(\d{1,12}|\*)$')
if statement_id is None or self.STATEMENT_ID.match(statement_id) is None:
raise JsonRESTError('InvalidParameterValue', 'StatementId must match ^[a-zA-Z0-9-_]{1,64}$')
self.permissions[statement_id] = {'action': action, 'principal': principal}
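The validation in `put_permission` relies on the two class-level regexes: `Principal` must be a 1-12 digit account id or `*`, and `StatementId` is limited to 64 alphanumeric characters, hyphens, or underscores. A standalone sketch of the same checks:

```python
import re

# same patterns as EventsBackend.ACCOUNT_ID / STATEMENT_ID
ACCOUNT_ID = re.compile(r'^(\d{1,12}|\*)$')
STATEMENT_ID = re.compile(r'^[a-zA-Z0-9-_]{1,64}$')

assert ACCOUNT_ID.match('123456789012') is not None
assert ACCOUNT_ID.match('*') is not None
assert ACCOUNT_ID.match('1234567890123') is None    # 13 digits: too long
assert STATEMENT_ID.match('allow-account_1') is not None
assert STATEMENT_ID.match('bad statement') is None  # spaces rejected
```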
def remove_permission(self, statement_id):
try:
del self.permissions[statement_id]
except KeyError:
raise JsonRESTError('ResourceNotFoundException', 'StatementId not found')
def describe_event_bus(self):
arn = "arn:aws:events:us-east-1:000000000000:event-bus/default"
statements = []
for statement_id, data in self.permissions.items():
statements.append({
'Sid': statement_id,
'Effect': 'Allow',
'Principal': {'AWS': 'arn:aws:iam::{0}:root'.format(data['principal'])},
'Action': 'events:{0}'.format(data['action']),
'Resource': arn
})
return {
'Policy': {'Version': '2012-10-17', 'Statement': statements},
'Name': 'default',
'Arn': arn
}
events_backend = EventsBackend()
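`describe_event_bus` renders the stored permissions into a standard IAM-style policy document: each statement grants the stored principal's root ARN the stored action against the default bus ARN. A standalone sketch of that assembly, using a hypothetical stored permission:

```python
# hypothetical stored permission, shaped like EventsBackend.permissions entries
permissions = {'my-statement': {'action': 'PutEvents', 'principal': '111111111111'}}
arn = "arn:aws:events:us-east-1:000000000000:event-bus/default"

statements = []
for statement_id, data in permissions.items():
    statements.append({
        'Sid': statement_id,
        'Effect': 'Allow',
        'Principal': {'AWS': 'arn:aws:iam::{0}:root'.format(data['principal'])},
        'Action': 'events:{0}'.format(data['action']),
        'Resource': arn,
    })

bus = {'Policy': {'Version': '2012-10-17', 'Statement': statements},
       'Name': 'default', 'Arn': arn}
print(bus['Policy']['Statement'][0]['Action'])  # events:PutEvents
```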


@ -18,9 +18,17 @@ class EventsHandler(BaseResponse):
'RoleArn': rule.role_arn
}
def load_body(self):
decoded_body = self.body
return json.loads(decoded_body or '{}')
@property
def request_params(self):
if not hasattr(self, '_json_body'):
try:
self._json_body = json.loads(self.body)
except ValueError:
self._json_body = {}
return self._json_body
def _get_param(self, param, if_none=None):
return self.request_params.get(param, if_none)
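The `request_params` property parses the JSON body once and caches it on the instance, so repeated `_get_param` calls don't re-parse; a malformed body falls back to an empty dict. The pattern in isolation, with a hypothetical `Handler` standing in for `BaseResponse`:

```python
import json

class Handler(object):  # stand-in for BaseResponse, for illustration
    def __init__(self, body):
        self.body = body

    @property
    def request_params(self):
        # parse the JSON body once and cache the result on the instance
        if not hasattr(self, '_json_body'):
            try:
                self._json_body = json.loads(self.body)
            except ValueError:
                self._json_body = {}
        return self._json_body

    def _get_param(self, param, if_none=None):
        return self.request_params.get(param, if_none)

h = Handler('{"Name": "my-rule"}')
print(h._get_param('Name'))       # my-rule
print(h._get_param('Limit', 50))  # 50
```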
def error(self, type_, message='', status=400):
headers = self.response_headers
@ -28,8 +36,7 @@ class EventsHandler(BaseResponse):
return json.dumps({'__type': type_, 'message': message}), headers,
def delete_rule(self):
body = self.load_body()
name = body.get('Name')
name = self._get_param('Name')
if not name:
return self.error('ValidationException', 'Parameter Name is required.')
@ -38,8 +45,7 @@ class EventsHandler(BaseResponse):
return '', self.response_headers
def describe_rule(self):
body = self.load_body()
name = body.get('Name')
name = self._get_param('Name')
if not name:
return self.error('ValidationException', 'Parameter Name is required.')
@ -53,8 +59,7 @@ class EventsHandler(BaseResponse):
return json.dumps(rule_dict), self.response_headers
def disable_rule(self):
body = self.load_body()
name = body.get('Name')
name = self._get_param('Name')
if not name:
return self.error('ValidationException', 'Parameter Name is required.')
@ -65,8 +70,7 @@ class EventsHandler(BaseResponse):
return '', self.response_headers
def enable_rule(self):
body = self.load_body()
name = body.get('Name')
name = self._get_param('Name')
if not name:
return self.error('ValidationException', 'Parameter Name is required.')
@ -80,10 +84,9 @@ class EventsHandler(BaseResponse):
pass
def list_rule_names_by_target(self):
body = self.load_body()
target_arn = body.get('TargetArn')
next_token = body.get('NextToken')
limit = body.get('Limit')
target_arn = self._get_param('TargetArn')
next_token = self._get_param('NextToken')
limit = self._get_param('Limit')
if not target_arn:
return self.error('ValidationException', 'Parameter TargetArn is required.')
@ -94,10 +97,9 @@ class EventsHandler(BaseResponse):
return json.dumps(rule_names), self.response_headers
def list_rules(self):
body = self.load_body()
prefix = body.get('NamePrefix')
next_token = body.get('NextToken')
limit = body.get('Limit')
prefix = self._get_param('NamePrefix')
next_token = self._get_param('NextToken')
limit = self._get_param('Limit')
rules = events_backend.list_rules(prefix, next_token, limit)
rules_obj = {'Rules': []}
@ -111,10 +113,9 @@ class EventsHandler(BaseResponse):
return json.dumps(rules_obj), self.response_headers
def list_targets_by_rule(self):
body = self.load_body()
rule_name = body.get('Rule')
next_token = body.get('NextToken')
limit = body.get('Limit')
rule_name = self._get_param('Rule')
next_token = self._get_param('NextToken')
limit = self._get_param('Limit')
if not rule_name:
return self.error('ValidationException', 'Parameter Rule is required.')
@ -128,13 +129,25 @@ class EventsHandler(BaseResponse):
return json.dumps(targets), self.response_headers
def put_events(self):
events = self._get_param('Entries')
failed_entries = events_backend.put_events(events)
if failed_entries:
return json.dumps({
'FailedEntryCount': len(failed_entries),
'Entries': failed_entries
})
return '', self.response_headers
def put_rule(self):
body = self.load_body()
name = body.get('Name')
event_pattern = body.get('EventPattern')
sched_exp = body.get('ScheduleExpression')
name = self._get_param('Name')
event_pattern = self._get_param('EventPattern')
sched_exp = self._get_param('ScheduleExpression')
state = self._get_param('State')
desc = self._get_param('Description')
role_arn = self._get_param('RoleArn')
if not name:
return self.error('ValidationException', 'Parameter Name is required.')
@ -156,17 +169,16 @@ class EventsHandler(BaseResponse):
name,
ScheduleExpression=sched_exp,
EventPattern=event_pattern,
State=body.get('State'),
Description=body.get('Description'),
RoleArn=body.get('RoleArn')
State=state,
Description=desc,
RoleArn=role_arn
)
return json.dumps({'RuleArn': rule_arn}), self.response_headers
def put_targets(self):
body = self.load_body()
rule_name = body.get('Rule')
targets = body.get('Targets')
rule_name = self._get_param('Rule')
targets = self._get_param('Targets')
if not rule_name:
return self.error('ValidationException', 'Parameter Rule is required.')
@ -180,9 +192,8 @@ class EventsHandler(BaseResponse):
return '', self.response_headers
def remove_targets(self):
body = self.load_body()
rule_name = body.get('Rule')
ids = body.get('Ids')
rule_name = self._get_param('Rule')
ids = self._get_param('Ids')
if not rule_name:
return self.error('ValidationException', 'Parameter Rule is required.')
@ -197,3 +208,22 @@ class EventsHandler(BaseResponse):
def test_event_pattern(self):
pass
def put_permission(self):
action = self._get_param('Action')
principal = self._get_param('Principal')
statement_id = self._get_param('StatementId')
events_backend.put_permission(action, principal, statement_id)
return ''
def remove_permission(self):
statement_id = self._get_param('StatementId')
events_backend.remove_permission(statement_id)
return ''
def describe_event_bus(self):
return json.dumps(events_backend.describe_event_bus())


@ -528,6 +528,12 @@ class IAMBackend(BaseBackend):
return role
raise IAMNotFoundException("Role {0} not found".format(role_name))
def get_role_by_arn(self, arn):
for role in self.get_roles():
if role.arn == arn:
return role
raise IAMNotFoundException("Role {0} not found".format(arn))
def delete_role(self, role_name):
for role in self.get_roles():
if role.name == role_name:


@ -1159,9 +1159,7 @@ CREATE_ACCESS_KEY_TEMPLATE = """<CreateAccessKeyResponse>
<UserName>{{ key.user_name }}</UserName>
<AccessKeyId>{{ key.access_key_id }}</AccessKeyId>
<Status>{{ key.status }}</Status>
<SecretAccessKey>
{{ key.secret_access_key }}
</SecretAccessKey>
<SecretAccessKey>{{ key.secret_access_key }}</SecretAccessKey>
</AccessKey>
</CreateAccessKeyResult>
<ResponseMetadata>

6
moto/iot/__init__.py Normal file

@ -0,0 +1,6 @@
from __future__ import unicode_literals
from .models import iot_backends
from ..core.models import base_decorator
iot_backend = iot_backends['us-east-1']
mock_iot = base_decorator(iot_backends)

24
moto/iot/exceptions.py Normal file

@ -0,0 +1,24 @@
from __future__ import unicode_literals
from moto.core.exceptions import JsonRESTError
class IoTClientError(JsonRESTError):
code = 400
class ResourceNotFoundException(IoTClientError):
def __init__(self):
self.code = 404
super(ResourceNotFoundException, self).__init__(
"ResourceNotFoundException",
"The specified resource does not exist"
)
class InvalidRequestException(IoTClientError):
def __init__(self):
self.code = 400
super(InvalidRequestException, self).__init__(
"InvalidRequestException",
"The request is not valid."
)

364
moto/iot/models.py Normal file

@ -0,0 +1,364 @@
from __future__ import unicode_literals
import time
import boto3
import string
import random
import hashlib
import uuid
from moto.core import BaseBackend, BaseModel
from collections import OrderedDict
from .exceptions import (
ResourceNotFoundException,
InvalidRequestException
)
class FakeThing(BaseModel):
def __init__(self, thing_name, thing_type, attributes, region_name):
self.region_name = region_name
self.thing_name = thing_name
self.thing_type = thing_type
self.attributes = attributes
self.arn = 'arn:aws:iot:%s:1:thing/%s' % (self.region_name, thing_name)
self.version = 1
        # TODO: do we need to handle 'version'?
# for iot-data
self.thing_shadow = None
def to_dict(self, include_default_client_id=False):
obj = {
'thingName': self.thing_name,
'attributes': self.attributes,
'version': self.version
}
if self.thing_type:
obj['thingTypeName'] = self.thing_type.thing_type_name
if include_default_client_id:
obj['defaultClientId'] = self.thing_name
return obj
class FakeThingType(BaseModel):
def __init__(self, thing_type_name, thing_type_properties, region_name):
self.region_name = region_name
self.thing_type_name = thing_type_name
self.thing_type_properties = thing_type_properties
t = time.time()
self.metadata = {
'deprecated': False,
            'creationDate': int(t * 1000) / 1000.0
}
self.arn = 'arn:aws:iot:%s:1:thingtype/%s' % (self.region_name, thing_type_name)
def to_dict(self):
return {
'thingTypeName': self.thing_type_name,
'thingTypeProperties': self.thing_type_properties,
'thingTypeMetadata': self.metadata
}
class FakeCertificate(BaseModel):
def __init__(self, certificate_pem, status, region_name):
m = hashlib.sha256()
m.update(str(uuid.uuid4()).encode('utf-8'))
self.certificate_id = m.hexdigest()
self.arn = 'arn:aws:iot:%s:1:cert/%s' % (region_name, self.certificate_id)
self.certificate_pem = certificate_pem
self.status = status
# TODO: must adjust
self.owner = '1'
self.transfer_data = {}
self.creation_date = time.time()
self.last_modified_date = self.creation_date
self.ca_certificate_id = None
def to_dict(self):
return {
'certificateArn': self.arn,
'certificateId': self.certificate_id,
'status': self.status,
'creationDate': self.creation_date
}
def to_description_dict(self):
"""
        You might need the keys below in some situations:
- caCertificateId
- previousOwnedBy
"""
return {
'certificateArn': self.arn,
'certificateId': self.certificate_id,
'status': self.status,
'certificatePem': self.certificate_pem,
'ownedBy': self.owner,
'creationDate': self.creation_date,
'lastModifiedDate': self.last_modified_date,
'transferData': self.transfer_data
}
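`FakeCertificate` derives its id by hashing a random UUID with SHA-256, which yields a 64-character hex string shaped like a real IoT certificate id. The same derivation in isolation:

```python
import hashlib
import uuid

# certificate ids are the SHA-256 hex digest of a random UUID
m = hashlib.sha256()
m.update(str(uuid.uuid4()).encode('utf-8'))
certificate_id = m.hexdigest()

assert len(certificate_id) == 64
assert all(c in '0123456789abcdef' for c in certificate_id)
```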
class FakePolicy(BaseModel):
def __init__(self, name, document, region_name):
self.name = name
self.document = document
self.arn = 'arn:aws:iot:%s:1:policy/%s' % (region_name, name)
self.version = '1' # TODO: handle version
def to_get_dict(self):
return {
'policyName': self.name,
'policyArn': self.arn,
'policyDocument': self.document,
'defaultVersionId': self.version
}
def to_dict_at_creation(self):
return {
'policyName': self.name,
'policyArn': self.arn,
'policyDocument': self.document,
'policyVersionId': self.version
}
def to_dict(self):
return {
'policyName': self.name,
'policyArn': self.arn,
}
class IoTBackend(BaseBackend):
def __init__(self, region_name=None):
super(IoTBackend, self).__init__()
self.region_name = region_name
self.things = OrderedDict()
self.thing_types = OrderedDict()
self.certificates = OrderedDict()
self.policies = OrderedDict()
self.principal_policies = OrderedDict()
self.principal_things = OrderedDict()
def reset(self):
region_name = self.region_name
self.__dict__ = {}
self.__init__(region_name)
def create_thing(self, thing_name, thing_type_name, attribute_payload):
thing_types = self.list_thing_types()
thing_type = None
if thing_type_name:
filtered_thing_types = [_ for _ in thing_types if _.thing_type_name == thing_type_name]
if len(filtered_thing_types) == 0:
raise ResourceNotFoundException()
thing_type = filtered_thing_types[0]
if attribute_payload is None:
attributes = {}
elif 'attributes' not in attribute_payload:
attributes = {}
else:
attributes = attribute_payload['attributes']
thing = FakeThing(thing_name, thing_type, attributes, self.region_name)
self.things[thing.arn] = thing
return thing.thing_name, thing.arn
def create_thing_type(self, thing_type_name, thing_type_properties):
if thing_type_properties is None:
thing_type_properties = {}
thing_type = FakeThingType(thing_type_name, thing_type_properties, self.region_name)
self.thing_types[thing_type.arn] = thing_type
return thing_type.thing_type_name, thing_type.arn
def list_thing_types(self, thing_type_name=None):
if thing_type_name:
            # It's weird, but thing_type_name is filtered by prefix (forward match), not exact match
return [_ for _ in self.thing_types.values() if _.thing_type_name.startswith(thing_type_name)]
thing_types = self.thing_types.values()
return thing_types
def list_things(self, attribute_name, attribute_value, thing_type_name):
        # TODO: filter by attributes or thing_type
things = self.things.values()
return things
def describe_thing(self, thing_name):
things = [_ for _ in self.things.values() if _.thing_name == thing_name]
if len(things) == 0:
raise ResourceNotFoundException()
return things[0]
def describe_thing_type(self, thing_type_name):
thing_types = [_ for _ in self.thing_types.values() if _.thing_type_name == thing_type_name]
if len(thing_types) == 0:
raise ResourceNotFoundException()
return thing_types[0]
def delete_thing(self, thing_name, expected_version):
# TODO: handle expected_version
        # can raise ResourceNotFoundException
thing = self.describe_thing(thing_name)
del self.things[thing.arn]
def delete_thing_type(self, thing_type_name):
        # can raise ResourceNotFoundException
thing_type = self.describe_thing_type(thing_type_name)
del self.thing_types[thing_type.arn]
def update_thing(self, thing_name, thing_type_name, attribute_payload, expected_version, remove_thing_type):
        # if attribute_payload is {}, nothing is updated
thing = self.describe_thing(thing_name)
thing_type = None
if remove_thing_type and thing_type_name:
raise InvalidRequestException()
# thing_type
if thing_type_name:
thing_types = self.list_thing_types()
filtered_thing_types = [_ for _ in thing_types if _.thing_type_name == thing_type_name]
if len(filtered_thing_types) == 0:
raise ResourceNotFoundException()
thing_type = filtered_thing_types[0]
thing.thing_type = thing_type
if remove_thing_type:
thing.thing_type = None
# attribute
if attribute_payload is not None and 'attributes' in attribute_payload:
do_merge = attribute_payload.get('merge', False)
attributes = attribute_payload['attributes']
if not do_merge:
thing.attributes = attributes
else:
thing.attributes.update(attributes)
def _random_string(self):
n = 20
random_str = ''.join([random.choice(string.ascii_letters + string.digits) for i in range(n)])
return random_str
def create_keys_and_certificate(self, set_as_active):
# implement here
# caCertificate can be blank
key_pair = {
'PublicKey': self._random_string(),
'PrivateKey': self._random_string()
}
certificate_pem = self._random_string()
status = 'ACTIVE' if set_as_active else 'INACTIVE'
certificate = FakeCertificate(certificate_pem, status, self.region_name)
self.certificates[certificate.certificate_id] = certificate
return certificate, key_pair
def delete_certificate(self, certificate_id):
self.describe_certificate(certificate_id)
del self.certificates[certificate_id]
def describe_certificate(self, certificate_id):
certs = [_ for _ in self.certificates.values() if _.certificate_id == certificate_id]
if len(certs) == 0:
raise ResourceNotFoundException()
return certs[0]
def list_certificates(self):
return self.certificates.values()
def update_certificate(self, certificate_id, new_status):
cert = self.describe_certificate(certificate_id)
# TODO: validate new_status
cert.status = new_status
def create_policy(self, policy_name, policy_document):
policy = FakePolicy(policy_name, policy_document, self.region_name)
self.policies[policy.name] = policy
return policy
def list_policies(self):
policies = self.policies.values()
return policies
def get_policy(self, policy_name):
policies = [_ for _ in self.policies.values() if _.name == policy_name]
if len(policies) == 0:
raise ResourceNotFoundException()
return policies[0]
def delete_policy(self, policy_name):
policy = self.get_policy(policy_name)
del self.policies[policy.name]
def _get_principal(self, principal_arn):
"""
raise ResourceNotFoundException
"""
if ':cert/' in principal_arn:
certs = [_ for _ in self.certificates.values() if _.arn == principal_arn]
if len(certs) == 0:
raise ResourceNotFoundException()
principal = certs[0]
return principal
else:
# TODO: search for cognito_ids
pass
raise ResourceNotFoundException()
def attach_principal_policy(self, policy_name, principal_arn):
principal = self._get_principal(principal_arn)
policy = self.get_policy(policy_name)
k = (principal_arn, policy_name)
if k in self.principal_policies:
return
self.principal_policies[k] = (principal, policy)
def detach_principal_policy(self, policy_name, principal_arn):
        # this may raise ResourceNotFoundException
self._get_principal(principal_arn)
self.get_policy(policy_name)
k = (principal_arn, policy_name)
if k not in self.principal_policies:
raise ResourceNotFoundException()
del self.principal_policies[k]
def list_principal_policies(self, principal_arn):
policies = [v[1] for k, v in self.principal_policies.items() if k[0] == principal_arn]
return policies
def list_policy_principals(self, policy_name):
principals = [k[0] for k, v in self.principal_policies.items() if k[1] == policy_name]
return principals
def attach_thing_principal(self, thing_name, principal_arn):
principal = self._get_principal(principal_arn)
thing = self.describe_thing(thing_name)
k = (principal_arn, thing_name)
if k in self.principal_things:
return
self.principal_things[k] = (principal, thing)
def detach_thing_principal(self, thing_name, principal_arn):
        # this may raise ResourceNotFoundException
self._get_principal(principal_arn)
self.describe_thing(thing_name)
k = (principal_arn, thing_name)
if k not in self.principal_things:
raise ResourceNotFoundException()
del self.principal_things[k]
def list_principal_things(self, principal_arn):
        thing_names = [k[1] for k, v in self.principal_things.items() if k[0] == principal_arn]
return thing_names
def list_thing_principals(self, thing_name):
principals = [k[0] for k, v in self.principal_things.items() if k[1] == thing_name]
return principals
available_regions = boto3.session.Session().get_available_regions("iot")
iot_backends = {region: IoTBackend(region) for region in available_regions}
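The attach/detach bookkeeping implemented above is a plain dict keyed by `(principal_arn, policy_name)` tuples. A standalone sketch of the same pattern (the policy names and ARN below are illustrative, not part of the backend):

```python
# Minimal sketch of the (principal, policy) attachment bookkeeping used
# by attach_principal_policy / list_principal_policies above.
principal_policies = {}

def attach(policy_name, principal_arn):
    key = (principal_arn, policy_name)
    if key in principal_policies:  # attaching twice is a no-op
        return
    principal_policies[key] = (principal_arn, policy_name)

def policies_for(principal_arn):
    # Same shape as list_principal_policies: filter on the first tuple slot
    return [k[1] for k in principal_policies if k[0] == principal_arn]

cert_arn = "arn:aws:iot:us-east-1:123456789012:cert/abc123"
attach("allow-publish", cert_arn)
attach("allow-publish", cert_arn)   # duplicate attach, ignored
attach("allow-subscribe", cert_arn)
assert policies_for(cert_arn) == ["allow-publish", "allow-subscribe"]
```

Detach is the inverse: delete the tuple key, raising if it was never attached.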

moto/iot/responses.py Normal file

@@ -0,0 +1,258 @@
from __future__ import unicode_literals
from moto.core.responses import BaseResponse
from .models import iot_backends
import json
class IoTResponse(BaseResponse):
SERVICE_NAME = 'iot'
@property
def iot_backend(self):
return iot_backends[self.region]
def create_thing(self):
thing_name = self._get_param("thingName")
thing_type_name = self._get_param("thingTypeName")
attribute_payload = self._get_param("attributePayload")
thing_name, thing_arn = self.iot_backend.create_thing(
thing_name=thing_name,
thing_type_name=thing_type_name,
attribute_payload=attribute_payload,
)
return json.dumps(dict(thingName=thing_name, thingArn=thing_arn))
def create_thing_type(self):
thing_type_name = self._get_param("thingTypeName")
thing_type_properties = self._get_param("thingTypeProperties")
thing_type_name, thing_type_arn = self.iot_backend.create_thing_type(
thing_type_name=thing_type_name,
thing_type_properties=thing_type_properties,
)
return json.dumps(dict(thingTypeName=thing_type_name, thingTypeArn=thing_type_arn))
def list_thing_types(self):
# previous_next_token = self._get_param("nextToken")
# max_results = self._get_int_param("maxResults")
thing_type_name = self._get_param("thingTypeName")
thing_types = self.iot_backend.list_thing_types(
thing_type_name=thing_type_name
)
# TODO: support next_token and max_results
next_token = None
return json.dumps(dict(thingTypes=[_.to_dict() for _ in thing_types], nextToken=next_token))
def list_things(self):
# previous_next_token = self._get_param("nextToken")
# max_results = self._get_int_param("maxResults")
attribute_name = self._get_param("attributeName")
attribute_value = self._get_param("attributeValue")
thing_type_name = self._get_param("thingTypeName")
things = self.iot_backend.list_things(
attribute_name=attribute_name,
attribute_value=attribute_value,
thing_type_name=thing_type_name,
)
# TODO: support next_token and max_results
next_token = None
return json.dumps(dict(things=[_.to_dict() for _ in things], nextToken=next_token))
def describe_thing(self):
thing_name = self._get_param("thingName")
thing = self.iot_backend.describe_thing(
thing_name=thing_name,
)
        return json.dumps(thing.to_dict(include_default_client_id=True))
def describe_thing_type(self):
thing_type_name = self._get_param("thingTypeName")
thing_type = self.iot_backend.describe_thing_type(
thing_type_name=thing_type_name,
)
return json.dumps(thing_type.to_dict())
def delete_thing(self):
thing_name = self._get_param("thingName")
expected_version = self._get_param("expectedVersion")
self.iot_backend.delete_thing(
thing_name=thing_name,
expected_version=expected_version,
)
return json.dumps(dict())
def delete_thing_type(self):
thing_type_name = self._get_param("thingTypeName")
self.iot_backend.delete_thing_type(
thing_type_name=thing_type_name,
)
return json.dumps(dict())
def update_thing(self):
thing_name = self._get_param("thingName")
thing_type_name = self._get_param("thingTypeName")
attribute_payload = self._get_param("attributePayload")
expected_version = self._get_param("expectedVersion")
remove_thing_type = self._get_param("removeThingType")
self.iot_backend.update_thing(
thing_name=thing_name,
thing_type_name=thing_type_name,
attribute_payload=attribute_payload,
expected_version=expected_version,
remove_thing_type=remove_thing_type,
)
return json.dumps(dict())
def create_keys_and_certificate(self):
set_as_active = self._get_param("setAsActive")
cert, key_pair = self.iot_backend.create_keys_and_certificate(
set_as_active=set_as_active,
)
return json.dumps(dict(
certificateArn=cert.arn,
certificateId=cert.certificate_id,
certificatePem=cert.certificate_pem,
keyPair=key_pair
))
def delete_certificate(self):
certificate_id = self._get_param("certificateId")
self.iot_backend.delete_certificate(
certificate_id=certificate_id,
)
return json.dumps(dict())
def describe_certificate(self):
certificate_id = self._get_param("certificateId")
certificate = self.iot_backend.describe_certificate(
certificate_id=certificate_id,
)
return json.dumps(dict(certificateDescription=certificate.to_description_dict()))
def list_certificates(self):
# page_size = self._get_int_param("pageSize")
# marker = self._get_param("marker")
# ascending_order = self._get_param("ascendingOrder")
certificates = self.iot_backend.list_certificates()
# TODO: handle pagination
return json.dumps(dict(certificates=[_.to_dict() for _ in certificates]))
def update_certificate(self):
certificate_id = self._get_param("certificateId")
new_status = self._get_param("newStatus")
self.iot_backend.update_certificate(
certificate_id=certificate_id,
new_status=new_status,
)
return json.dumps(dict())
def create_policy(self):
policy_name = self._get_param("policyName")
policy_document = self._get_param("policyDocument")
policy = self.iot_backend.create_policy(
policy_name=policy_name,
policy_document=policy_document,
)
return json.dumps(policy.to_dict_at_creation())
def list_policies(self):
# marker = self._get_param("marker")
# page_size = self._get_int_param("pageSize")
# ascending_order = self._get_param("ascendingOrder")
policies = self.iot_backend.list_policies()
# TODO: handle pagination
return json.dumps(dict(policies=[_.to_dict() for _ in policies]))
def get_policy(self):
policy_name = self._get_param("policyName")
policy = self.iot_backend.get_policy(
policy_name=policy_name,
)
return json.dumps(policy.to_get_dict())
def delete_policy(self):
policy_name = self._get_param("policyName")
self.iot_backend.delete_policy(
policy_name=policy_name,
)
return json.dumps(dict())
def attach_principal_policy(self):
policy_name = self._get_param("policyName")
principal = self.headers.get('x-amzn-iot-principal')
self.iot_backend.attach_principal_policy(
policy_name=policy_name,
principal_arn=principal,
)
return json.dumps(dict())
def detach_principal_policy(self):
policy_name = self._get_param("policyName")
principal = self.headers.get('x-amzn-iot-principal')
self.iot_backend.detach_principal_policy(
policy_name=policy_name,
principal_arn=principal,
)
return json.dumps(dict())
def list_principal_policies(self):
principal = self.headers.get('x-amzn-iot-principal')
# marker = self._get_param("marker")
# page_size = self._get_int_param("pageSize")
# ascending_order = self._get_param("ascendingOrder")
policies = self.iot_backend.list_principal_policies(
principal_arn=principal
)
# TODO: handle pagination
next_marker = None
return json.dumps(dict(policies=[_.to_dict() for _ in policies], nextMarker=next_marker))
def list_policy_principals(self):
policy_name = self.headers.get('x-amzn-iot-policy')
# marker = self._get_param("marker")
# page_size = self._get_int_param("pageSize")
# ascending_order = self._get_param("ascendingOrder")
principals = self.iot_backend.list_policy_principals(
policy_name=policy_name,
)
# TODO: handle pagination
next_marker = None
return json.dumps(dict(principals=principals, nextMarker=next_marker))
def attach_thing_principal(self):
thing_name = self._get_param("thingName")
principal = self.headers.get('x-amzn-principal')
self.iot_backend.attach_thing_principal(
thing_name=thing_name,
principal_arn=principal,
)
return json.dumps(dict())
def detach_thing_principal(self):
thing_name = self._get_param("thingName")
principal = self.headers.get('x-amzn-principal')
self.iot_backend.detach_thing_principal(
thing_name=thing_name,
principal_arn=principal,
)
return json.dumps(dict())
def list_principal_things(self):
next_token = self._get_param("nextToken")
# max_results = self._get_int_param("maxResults")
principal = self.headers.get('x-amzn-principal')
things = self.iot_backend.list_principal_things(
principal_arn=principal,
)
# TODO: handle pagination
next_token = None
return json.dumps(dict(things=things, nextToken=next_token))
def list_thing_principals(self):
thing_name = self._get_param("thingName")
principals = self.iot_backend.list_thing_principals(
thing_name=thing_name,
)
return json.dumps(dict(principals=principals))

moto/iot/urls.py Normal file

@@ -0,0 +1,14 @@
from __future__ import unicode_literals
from .responses import IoTResponse
url_bases = [
"https?://iot.(.+).amazonaws.com",
]
response = IoTResponse()
url_paths = {
'{0}/.*$': response.dispatch,
}

moto/iotdata/__init__.py Normal file

@@ -0,0 +1,6 @@
from __future__ import unicode_literals
from .models import iotdata_backends
from ..core.models import base_decorator
iotdata_backend = iotdata_backends['us-east-1']
mock_iotdata = base_decorator(iotdata_backends)

moto/iotdata/exceptions.py Normal file

@@ -0,0 +1,23 @@
from __future__ import unicode_literals
from moto.core.exceptions import JsonRESTError
class IoTDataPlaneClientError(JsonRESTError):
code = 400
class ResourceNotFoundException(IoTDataPlaneClientError):
def __init__(self):
self.code = 404
super(ResourceNotFoundException, self).__init__(
"ResourceNotFoundException",
"The specified resource does not exist"
)
class InvalidRequestException(IoTDataPlaneClientError):
def __init__(self, message):
self.code = 400
super(InvalidRequestException, self).__init__(
"InvalidRequestException", message
)

moto/iotdata/models.py Normal file

@@ -0,0 +1,193 @@
from __future__ import unicode_literals
import json
import time
import boto3
import jsondiff
from moto.core import BaseBackend, BaseModel
from moto.iot import iot_backends
from .exceptions import (
ResourceNotFoundException,
InvalidRequestException
)
class FakeShadow(BaseModel):
"""See the specification:
http://docs.aws.amazon.com/iot/latest/developerguide/thing-shadow-document-syntax.html
"""
def __init__(self, desired, reported, requested_payload, version, deleted=False):
self.desired = desired
self.reported = reported
self.requested_payload = requested_payload
self.version = version
self.timestamp = int(time.time())
self.deleted = deleted
self.metadata_desired = self._create_metadata_from_state(self.desired, self.timestamp)
self.metadata_reported = self._create_metadata_from_state(self.reported, self.timestamp)
@classmethod
def create_from_previous_version(cls, previous_shadow, payload):
"""
        Pass payload=None to delete the shadow (i.e. a delete_thing_shadow request)
"""
version, previous_payload = (previous_shadow.version + 1, previous_shadow.to_dict(include_delta=False)) if previous_shadow else (1, {})
if payload is None:
# if given payload is None, delete existing payload
# this means the request was delete_thing_shadow
shadow = FakeShadow(None, None, None, version, deleted=True)
return shadow
# we can make sure that payload has 'state' key
desired = payload['state'].get(
'desired',
previous_payload.get('state', {}).get('desired', None)
)
reported = payload['state'].get(
'reported',
previous_payload.get('state', {}).get('reported', None)
)
shadow = FakeShadow(desired, reported, payload, version)
return shadow
@classmethod
def parse_payload(cls, desired, reported):
if desired is None:
delta = reported
elif reported is None:
delta = desired
else:
delta = jsondiff.diff(desired, reported)
return delta
def _create_metadata_from_state(self, state, ts):
"""
        `state` must be a desired- or reported-style state dict;
        replaces every primitive value with {"timestamp": ts}
"""
if state is None:
return None
def _f(elem, ts):
if isinstance(elem, dict):
return {_: _f(elem[_], ts) for _ in elem.keys()}
if isinstance(elem, list):
return [_f(_, ts) for _ in elem]
return {"timestamp": ts}
return _f(state, ts)
def to_response_dict(self):
desired = self.requested_payload['state'].get('desired', None)
reported = self.requested_payload['state'].get('reported', None)
payload = {}
if desired is not None:
payload['desired'] = desired
if reported is not None:
payload['reported'] = reported
metadata = {}
if desired is not None:
metadata['desired'] = self._create_metadata_from_state(desired, self.timestamp)
if reported is not None:
metadata['reported'] = self._create_metadata_from_state(reported, self.timestamp)
return {
'state': payload,
'metadata': metadata,
'timestamp': self.timestamp,
'version': self.version
}
def to_dict(self, include_delta=True):
"""returning nothing except for just top-level keys for now.
"""
if self.deleted:
return {
'timestamp': self.timestamp,
'version': self.version
}
delta = self.parse_payload(self.desired, self.reported)
payload = {}
if self.desired is not None:
payload['desired'] = self.desired
if self.reported is not None:
payload['reported'] = self.reported
if include_delta and (delta is not None and len(delta.keys()) != 0):
payload['delta'] = delta
metadata = {}
if self.metadata_desired is not None:
metadata['desired'] = self.metadata_desired
if self.metadata_reported is not None:
metadata['reported'] = self.metadata_reported
return {
'state': payload,
'metadata': metadata,
'timestamp': self.timestamp,
'version': self.version
}
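The metadata document produced by `_create_metadata_from_state` mirrors the state document with every primitive leaf replaced by `{"timestamp": ts}`. A standalone sketch of that transform (the function name and sample state are illustrative):

```python
def state_to_metadata(state, ts):
    # Recursively replace every primitive leaf with {"timestamp": ts},
    # preserving the dict/list structure of the state document.
    if isinstance(state, dict):
        return {k: state_to_metadata(v, ts) for k, v in state.items()}
    if isinstance(state, list):
        return [state_to_metadata(v, ts) for v in state]
    return {"timestamp": ts}

meta = state_to_metadata({"led": "on", "pins": [1, 2]}, 1700000000)
assert meta == {
    "led": {"timestamp": 1700000000},
    "pins": [{"timestamp": 1700000000}, {"timestamp": 1700000000}],
}
```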
class IoTDataPlaneBackend(BaseBackend):
def __init__(self, region_name=None):
super(IoTDataPlaneBackend, self).__init__()
self.region_name = region_name
def reset(self):
region_name = self.region_name
self.__dict__ = {}
self.__init__(region_name)
def update_thing_shadow(self, thing_name, payload):
"""
        Payload validation (error messages mirror the real service):
        - need node `state`
        - state node must be an Object
        - State contains an invalid node: 'foo'
"""
thing = iot_backends[self.region_name].describe_thing(thing_name)
# validate
try:
payload = json.loads(payload)
except ValueError:
raise InvalidRequestException('invalid json')
if 'state' not in payload:
raise InvalidRequestException('need node `state`')
if not isinstance(payload['state'], dict):
raise InvalidRequestException('state node must be an Object')
if any(_ for _ in payload['state'].keys() if _ not in ['desired', 'reported']):
raise InvalidRequestException('State contains an invalid node')
new_shadow = FakeShadow.create_from_previous_version(thing.thing_shadow, payload)
thing.thing_shadow = new_shadow
return thing.thing_shadow
def get_thing_shadow(self, thing_name):
thing = iot_backends[self.region_name].describe_thing(thing_name)
if thing.thing_shadow is None or thing.thing_shadow.deleted:
raise ResourceNotFoundException()
return thing.thing_shadow
def delete_thing_shadow(self, thing_name):
"""after deleting, get_thing_shadow will raise ResourceNotFound.
But version of the shadow keep increasing...
"""
thing = iot_backends[self.region_name].describe_thing(thing_name)
if thing.thing_shadow is None:
raise ResourceNotFoundException()
payload = None
new_shadow = FakeShadow.create_from_previous_version(thing.thing_shadow, payload)
thing.thing_shadow = new_shadow
return thing.thing_shadow
def publish(self, topic, qos, payload):
# do nothing because client won't know about the result
return None
available_regions = boto3.session.Session().get_available_regions("iot-data")
iotdata_backends = {region: IoTDataPlaneBackend(region) for region in available_regions}
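A standalone sketch of the shadow versioning rules above: every update bumps the version, and a delete also bumps it while marking the shadow deleted (so `get_thing_shadow` raises until the next update). Class and helper names here are illustrative stand-ins, not the backend's API:

```python
class Shadow:
    # Reduced stand-in for FakeShadow: just state, version, deleted flag.
    def __init__(self, state, version, deleted=False):
        self.state, self.version, self.deleted = state, version, deleted

def next_shadow(previous, payload):
    # Mirrors create_from_previous_version: a None payload means delete.
    version = previous.version + 1 if previous else 1
    if payload is None:
        return Shadow(None, version, deleted=True)
    return Shadow(payload["state"], version)

s1 = next_shadow(None, {"state": {"desired": {"led": "on"}}})
s2 = next_shadow(s1, None)   # delete still advances the version
s3 = next_shadow(s2, {"state": {"reported": {"led": "off"}}})
assert (s1.version, s2.version, s3.version) == (1, 2, 3)
assert s2.deleted and not s3.deleted
```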

moto/iotdata/responses.py Normal file

@@ -0,0 +1,46 @@
from __future__ import unicode_literals
from moto.core.responses import BaseResponse
from .models import iotdata_backends
import json
class IoTDataPlaneResponse(BaseResponse):
SERVICE_NAME = 'iot-data'
@property
def iotdata_backend(self):
return iotdata_backends[self.region]
def update_thing_shadow(self):
thing_name = self._get_param("thingName")
payload = self.body
payload = self.iotdata_backend.update_thing_shadow(
thing_name=thing_name,
payload=payload,
)
return json.dumps(payload.to_response_dict())
def get_thing_shadow(self):
thing_name = self._get_param("thingName")
payload = self.iotdata_backend.get_thing_shadow(
thing_name=thing_name,
)
return json.dumps(payload.to_dict())
def delete_thing_shadow(self):
thing_name = self._get_param("thingName")
payload = self.iotdata_backend.delete_thing_shadow(
thing_name=thing_name,
)
return json.dumps(payload.to_dict())
def publish(self):
topic = self._get_param("topic")
qos = self._get_int_param("qos")
payload = self._get_param("payload")
self.iotdata_backend.publish(
topic=topic,
qos=qos,
payload=payload,
)
return json.dumps(dict())

moto/iotdata/urls.py Normal file

@@ -0,0 +1,14 @@
from __future__ import unicode_literals
from .responses import IoTDataPlaneResponse
url_bases = [
"https?://data.iot.(.+).amazonaws.com",
]
response = IoTDataPlaneResponse()
url_paths = {
'{0}/.*$': response.dispatch,
}

moto/kms/models.py

@@ -103,8 +103,10 @@ class KmsBackend(BaseBackend):
self.key_to_aliases[target_key_id].add(alias_name)
def delete_alias(self, alias_name):
"""Delete the alias."""
for aliases in self.key_to_aliases.values():
aliases.remove(alias_name)
if alias_name in aliases:
aliases.remove(alias_name)
def get_all_aliases(self):
return self.key_to_aliases
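The membership guard added to `delete_alias` matters because `set.remove` raises `KeyError` for a missing element, and most keys will not hold the alias being deleted. A minimal reproduction of the pattern (the sample data is illustrative):

```python
# Two keys, each holding one alias, as in KmsBackend.key_to_aliases.
aliases_by_key = {"key-1": {"alias/a"}, "key-2": {"alias/b"}}

def delete_alias(alias_name):
    # Only remove where present; a bare .remove() would raise KeyError
    # on every key that does not hold this alias.
    for aliases in aliases_by_key.values():
        if alias_name in aliases:
            aliases.remove(alias_name)

delete_alias("alias/b")
assert aliases_by_key == {"key-1": {"alias/a"}, "key-2": set()}
```

`set.discard` would achieve the same thing in one call.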

moto/logs/models.py

@@ -22,6 +22,13 @@ class LogEvent:
"timestamp": self.timestamp
}
def to_response_dict(self):
return {
"ingestionTime": self.ingestionTime,
"message": self.message,
"timestamp": self.timestamp
}
class LogStream:
_log_ids = 0
@@ -41,7 +48,14 @@ class LogStream:
self.__class__._log_ids += 1
def _update(self):
self.firstEventTimestamp = min([x.timestamp for x in self.events])
self.lastEventTimestamp = max([x.timestamp for x in self.events])
def to_describe_dict(self):
# Compute start and end times
self._update()
return {
"arn": self.arn,
"creationTime": self.creationTime,
@@ -79,7 +93,7 @@ class LogStream:
if next_token is None:
next_token = 0
events_page = events[next_token: next_token + limit]
events_page = [event.to_response_dict() for event in events[next_token: next_token + limit]]
next_token += limit
if next_token >= len(self.events):
next_token = None
@@ -120,17 +134,17 @@ class LogGroup:
del self.streams[log_stream_name]
def describe_log_streams(self, descending, limit, log_group_name, log_stream_name_prefix, next_token, order_by):
log_streams = [stream.to_describe_dict() for name, stream in self.streams.items() if name.startswith(log_stream_name_prefix)]
log_streams = [(name, stream.to_describe_dict()) for name, stream in self.streams.items() if name.startswith(log_stream_name_prefix)]
def sorter(stream):
return stream.name if order_by == 'logStreamName' else stream.lastEventTimestamp
def sorter(item):
return item[0] if order_by == 'logStreamName' else item[1]['lastEventTimestamp']
if next_token is None:
next_token = 0
log_streams = sorted(log_streams, key=sorter, reverse=descending)
new_token = next_token + limit
log_streams_page = log_streams[next_token: new_token]
log_streams_page = [x[1] for x in log_streams[next_token: new_token]]
if new_token >= len(log_streams):
new_token = None
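The `describe_log_streams` change above sorts `(name, dict)` pairs so ordering can key off either the stream name or `lastEventTimestamp`, then strips the names before returning. A standalone sketch of that ordering (stream names and timestamps are illustrative):

```python
streams = {
    "beta": {"lastEventTimestamp": 30},
    "alpha": {"lastEventTimestamp": 10},
    "gamma": {"lastEventTimestamp": 20},
}

def describe(order_by, descending=False):
    # Pair each describe-dict with its name so both sort keys are available.
    items = [(name, d) for name, d in streams.items()]
    key = (lambda it: it[0]) if order_by == "logStreamName" else (
        lambda it: it[1]["lastEventTimestamp"])
    # Return only the dicts, as the API response does.
    return [d for _, d in sorted(items, key=key, reverse=descending)]

assert describe("logStreamName")[0] == {"lastEventTimestamp": 10}
assert describe("LastEventTime", descending=True)[0] == {"lastEventTimestamp": 30}
```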

moto/logs/responses.py

@@ -47,7 +47,7 @@ class LogsResponse(BaseResponse):
def describe_log_streams(self):
log_group_name = self._get_param('logGroupName')
log_stream_name_prefix = self._get_param('logStreamNamePrefix')
log_stream_name_prefix = self._get_param('logStreamNamePrefix', '')
descending = self._get_param('descending', False)
limit = self._get_param('limit', 50)
assert limit <= 50
@@ -83,13 +83,13 @@ class LogsResponse(BaseResponse):
limit = self._get_param('limit', 10000)
assert limit <= 10000
next_token = self._get_param('nextToken')
start_from_head = self._get_param('startFromHead')
start_from_head = self._get_param('startFromHead', False)
        events, next_backward_token, next_forward_token = \
            self.logs_backend.get_log_events(log_group_name, log_stream_name, start_time, end_time, limit, next_token, start_from_head)
        return json.dumps({
            "events": events,
            "events": [ob.__dict__ for ob in events],
            "nextBackwardToken": next_backward_token,
            "nextForwardToken": next_forward_token
        })

moto/rds/responses.py

@@ -107,6 +107,9 @@ class RDSResponse(BaseResponse):
def modify_db_instance(self):
db_instance_identifier = self._get_param('DBInstanceIdentifier')
db_kwargs = self._get_db_kwargs()
new_db_instance_identifier = self._get_param('NewDBInstanceIdentifier')
if new_db_instance_identifier:
db_kwargs['new_db_instance_identifier'] = new_db_instance_identifier
database = self.backend.modify_database(
db_instance_identifier, db_kwargs)
template = self.response_template(MODIFY_DATABASE_TEMPLATE)

moto/rds2/models.py

@@ -704,7 +704,8 @@ class RDS2Backend(BaseBackend):
if self.arn_regex.match(source_database_id):
db_kwargs['region'] = self.region
replica = copy.deepcopy(primary)
# Shouldn't really copy here as the instance is duplicated. RDS replicas have different instances.
replica = copy.copy(primary)
replica.update(db_kwargs)
replica.set_as_replica()
self.databases[database_id] = replica
@@ -735,6 +736,10 @@ class RDS2Backend(BaseBackend):
def modify_database(self, db_instance_identifier, db_kwargs):
database = self.describe_databases(db_instance_identifier)[0]
if 'new_db_instance_identifier' in db_kwargs:
del self.databases[db_instance_identifier]
db_instance_identifier = db_kwargs['db_instance_identifier'] = db_kwargs.pop('new_db_instance_identifier')
self.databases[db_instance_identifier] = database
database.update(db_kwargs)
return database
@@ -752,13 +757,13 @@ class RDS2Backend(BaseBackend):
raise InvalidDBInstanceStateError(db_instance_identifier, 'stop')
if db_snapshot_identifier:
self.create_snapshot(db_instance_identifier, db_snapshot_identifier)
database.status = 'shutdown'
database.status = 'stopped'
return database
def start_database(self, db_instance_identifier):
database = self.describe_databases(db_instance_identifier)[0]
# todo: bunch of different error messages to be generated from this api call
if database.status != 'shutdown':
if database.status != 'stopped':
raise InvalidDBInstanceStateError(db_instance_identifier, 'start')
database.status = 'available'
return database
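The status fix above matters because real RDS reports a stopped instance as `stopped` (not `shutdown`), and starting is only valid from that state. A reduced sketch of the transition checks (the real backend raises `InvalidDBInstanceStateError` and covers more conditions; names here are illustrative):

```python
class Database:
    # Reduced stand-in for the RDS2 database model: status only.
    status = "available"

def stop(db):
    if db.status != "available":
        raise RuntimeError("InvalidDBInstanceState: stop")
    db.status = "stopped"   # 'shutdown' before the fix, which broke start

def start(db):
    if db.status != "stopped":
        raise RuntimeError("InvalidDBInstanceState: start")
    db.status = "available"

db = Database()
stop(db)
assert db.status == "stopped"
start(db)
assert db.status == "available"
```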

moto/rds2/responses.py

@@ -135,6 +135,9 @@ class RDS2Response(BaseResponse):
def modify_db_instance(self):
db_instance_identifier = self._get_param('DBInstanceIdentifier')
db_kwargs = self._get_db_kwargs()
new_db_instance_identifier = self._get_param('NewDBInstanceIdentifier')
if new_db_instance_identifier:
db_kwargs['new_db_instance_identifier'] = new_db_instance_identifier
database = self.backend.modify_database(
db_instance_identifier, db_kwargs)
template = self.response_template(MODIFY_DATABASE_TEMPLATE)

moto/resourcegroupstaggingapi/__init__.py Normal file

@@ -0,0 +1,6 @@
from __future__ import unicode_literals
from .models import resourcegroupstaggingapi_backends
from ..core.models import base_decorator
resourcegroupstaggingapi_backend = resourcegroupstaggingapi_backends['us-east-1']
mock_resourcegroupstaggingapi = base_decorator(resourcegroupstaggingapi_backends)

moto/resourcegroupstaggingapi/models.py Normal file

@@ -0,0 +1,511 @@
from __future__ import unicode_literals
import uuid
import boto3
import six
from moto.core import BaseBackend
from moto.core.exceptions import RESTError
from moto.s3 import s3_backends
from moto.ec2 import ec2_backends
from moto.elb import elb_backends
from moto.elbv2 import elbv2_backends
from moto.kinesis import kinesis_backends
from moto.rds2 import rds2_backends
from moto.glacier import glacier_backends
from moto.redshift import redshift_backends
from moto.emr import emr_backends
# Left: EC2 ElastiCache RDS ELB CloudFront WorkSpaces Lambda EMR Glacier Kinesis Redshift Route53
# StorageGateway DynamoDB MachineLearning ACM DirectConnect DirectoryService CloudHSM
# Inspector Elasticsearch
class ResourceGroupsTaggingAPIBackend(BaseBackend):
def __init__(self, region_name=None):
super(ResourceGroupsTaggingAPIBackend, self).__init__()
self.region_name = region_name
self._pages = {}
# Like 'someuuid': {'gen': <generator>, 'misc': None}
        # Misc is there for peeking from a generator when an item can't
        # fit in the current request. As we only store generators
        # there's not really any point to clean up
def reset(self):
region_name = self.region_name
self.__dict__ = {}
self.__init__(region_name)
@property
def s3_backend(self):
"""
:rtype: moto.s3.models.S3Backend
"""
return s3_backends['global']
@property
def ec2_backend(self):
"""
:rtype: moto.ec2.models.EC2Backend
"""
return ec2_backends[self.region_name]
@property
def elb_backend(self):
"""
:rtype: moto.elb.models.ELBBackend
"""
return elb_backends[self.region_name]
@property
def elbv2_backend(self):
"""
:rtype: moto.elbv2.models.ELBv2Backend
"""
return elbv2_backends[self.region_name]
@property
def kinesis_backend(self):
"""
:rtype: moto.kinesis.models.KinesisBackend
"""
return kinesis_backends[self.region_name]
@property
def rds_backend(self):
"""
:rtype: moto.rds2.models.RDS2Backend
"""
return rds2_backends[self.region_name]
@property
def glacier_backend(self):
"""
:rtype: moto.glacier.models.GlacierBackend
"""
return glacier_backends[self.region_name]
@property
def emr_backend(self):
"""
:rtype: moto.emr.models.ElasticMapReduceBackend
"""
return emr_backends[self.region_name]
@property
def redshift_backend(self):
"""
:rtype: moto.redshift.models.RedshiftBackend
"""
return redshift_backends[self.region_name]
def _get_resources_generator(self, tag_filters=None, resource_type_filters=None):
# Look at
# https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
# TODO move these to their respective backends
        filters = [lambda t, v: True]
        for tag_filter_dict in tag_filters:
            values = tag_filter_dict.get('Values', [])
            if len(values) == 0:
                # Check key matches; bind loop variables as lambda defaults
                # to avoid Python's late-binding closure pitfall
                filters.append(lambda t, v, key=tag_filter_dict['Key']: t == key)
            elif len(values) == 1:
                # Check it's exactly the same as key, value
                filters.append(lambda t, v, key=tag_filter_dict['Key'], value=values[0]: t == key and v == value)
            else:
                # Check key matches and value is one of the provided values
                filters.append(lambda t, v, key=tag_filter_dict['Key'], values=tuple(values): t == key and v in values)
def tag_filter(tag_list):
result = []
for tag in tag_list:
temp_result = []
for f in filters:
f_result = f(tag['Key'], tag['Value'])
temp_result.append(f_result)
result.append(all(temp_result))
return any(result)
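One subtlety in the filter-building loop above: lambdas created inside a loop should bind the loop variables at definition time (e.g. via default arguments), because Python closures are late-binding and would otherwise all see the final `tag_filter_dict`. A minimal illustration of the rule:

```python
# Late binding: every closure reads i when called, and after the loop
# the shared i is 2, so all three return 2.
late = [lambda: i for i in range(3)]
assert [f() for f in late] == [2, 2, 2]

# Default arguments evaluate at definition time, capturing each value.
early = [lambda i=i: i for i in range(3)]
assert [f() for f in early] == [0, 1, 2]
```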
# Do S3, resource type s3
if not resource_type_filters or 's3' in resource_type_filters:
for bucket in self.s3_backend.buckets.values():
tags = []
for tag in bucket.tags.tag_set.tags:
tags.append({'Key': tag.key, 'Value': tag.value})
if not tags or not tag_filter(tags): # Skip if no tags, or invalid filter
continue
yield {'ResourceARN': 'arn:aws:s3:::' + bucket.name, 'Tags': tags}
# EC2 tags
def get_ec2_tags(res_id):
result = []
for key, value in self.ec2_backend.tags.get(res_id, {}).items():
result.append({'Key': key, 'Value': value})
return result
# EC2 AMI, resource type ec2:image
if not resource_type_filters or 'ec2' in resource_type_filters or 'ec2:image' in resource_type_filters:
for ami in self.ec2_backend.amis.values():
tags = get_ec2_tags(ami.id)
if not tags or not tag_filter(tags): # Skip if no tags, or invalid filter
continue
yield {'ResourceARN': 'arn:aws:ec2:{0}::image/{1}'.format(self.region_name, ami.id), 'Tags': tags}
# EC2 Instance, resource type ec2:instance
if not resource_type_filters or 'ec2' in resource_type_filters or 'ec2:instance' in resource_type_filters:
for reservation in self.ec2_backend.reservations.values():
for instance in reservation.instances:
tags = get_ec2_tags(instance.id)
if not tags or not tag_filter(tags): # Skip if no tags, or invalid filter
continue
yield {'ResourceARN': 'arn:aws:ec2:{0}::instance/{1}'.format(self.region_name, instance.id), 'Tags': tags}
# EC2 NetworkInterface, resource type ec2:network-interface
if not resource_type_filters or 'ec2' in resource_type_filters or 'ec2:network-interface' in resource_type_filters:
for eni in self.ec2_backend.enis.values():
tags = get_ec2_tags(eni.id)
if not tags or not tag_filter(tags): # Skip if no tags, or invalid filter
continue
yield {'ResourceARN': 'arn:aws:ec2:{0}::network-interface/{1}'.format(self.region_name, eni.id), 'Tags': tags}
# TODO EC2 ReservedInstance
# EC2 SecurityGroup, resource type ec2:security-group
if not resource_type_filters or 'ec2' in resource_type_filters or 'ec2:security-group' in resource_type_filters:
for vpc in self.ec2_backend.groups.values():
for sg in vpc.values():
tags = get_ec2_tags(sg.id)
if not tags or not tag_filter(tags): # Skip if no tags, or invalid filter
continue
yield {'ResourceARN': 'arn:aws:ec2:{0}::security-group/{1}'.format(self.region_name, sg.id), 'Tags': tags}
# EC2 Snapshot, resource type ec2:snapshot
if not resource_type_filters or 'ec2' in resource_type_filters or 'ec2:snapshot' in resource_type_filters:
for snapshot in self.ec2_backend.snapshots.values():
tags = get_ec2_tags(snapshot.id)
if not tags or not tag_filter(tags): # Skip if no tags, or invalid filter
continue
yield {'ResourceARN': 'arn:aws:ec2:{0}::snapshot/{1}'.format(self.region_name, snapshot.id), 'Tags': tags}
# TODO EC2 SpotInstanceRequest
# EC2 Volume, resource type ec2:volume
if not resource_type_filters or 'ec2' in resource_type_filters or 'ec2:volume' in resource_type_filters:
for volume in self.ec2_backend.volumes.values():
tags = get_ec2_tags(volume.id)
if not tags or not tag_filter(tags): # Skip if no tags, or invalid filter
continue
yield {'ResourceARN': 'arn:aws:ec2:{0}::volume/{1}'.format(self.region_name, volume.id), 'Tags': tags}
# TODO add these to the keys and values functions / combine functions
# ELB
# EMR Cluster
# Glacier Vault
# Kinesis
# RDS Instance
# RDS Reserved Database Instance
# RDS Option Group
# RDS Parameter Group
# RDS Security Group
# RDS Snapshot
# RDS Subnet Group
# RDS Event Subscription
# RedShift Cluster
# RedShift Hardware security module (HSM) client certificate
# RedShift HSM connection
# RedShift Parameter group
# RedShift Snapshot
# RedShift Subnet group
# VPC
# VPC Customer Gateway
# VPC DHCP Option Set
# VPC Internet Gateway
# VPC Network ACL
# VPC Route Table
# VPC Subnet
# VPC Virtual Private Gateway
# VPC VPN Connection
def _get_tag_keys_generator(self):
# Look at
# https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
# Do S3, resource type s3
for bucket in self.s3_backend.buckets.values():
for tag in bucket.tags.tag_set.tags:
yield tag.key
# EC2 tags
def get_ec2_keys(res_id):
result = []
for key in self.ec2_backend.tags.get(res_id, {}):
result.append(key)
return result
# EC2 AMI, resource type ec2:image
for ami in self.ec2_backend.amis.values():
for key in get_ec2_keys(ami.id):
yield key
# EC2 Instance, resource type ec2:instance
for reservation in self.ec2_backend.reservations.values():
for instance in reservation.instances:
for key in get_ec2_keys(instance.id):
yield key
# EC2 NetworkInterface, resource type ec2:network-interface
for eni in self.ec2_backend.enis.values():
for key in get_ec2_keys(eni.id):
yield key
# TODO EC2 ReservedInstance
# EC2 SecurityGroup, resource type ec2:security-group
for vpc in self.ec2_backend.groups.values():
for sg in vpc.values():
for key in get_ec2_keys(sg.id):
yield key
# EC2 Snapshot, resource type ec2:snapshot
for snapshot in self.ec2_backend.snapshots.values():
for key in get_ec2_keys(snapshot.id):
yield key
# TODO EC2 SpotInstanceRequest
# EC2 Volume, resource type ec2:volume
for volume in self.ec2_backend.volumes.values():
for key in get_ec2_keys(volume.id):
yield key
def _get_tag_values_generator(self, tag_key):
# Look at
# https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
# Do S3, resource type s3
for bucket in self.s3_backend.buckets.values():
for tag in bucket.tags.tag_set.tags:
if tag.key == tag_key:
yield tag.value
# EC2 tags
def get_ec2_values(res_id):
result = []
for key, value in self.ec2_backend.tags.get(res_id, {}).items():
if key == tag_key:
result.append(value)
return result
# EC2 AMI, resource type ec2:image
for ami in self.ec2_backend.amis.values():
for value in get_ec2_values(ami.id):
yield value
# EC2 Instance, resource type ec2:instance
for reservation in self.ec2_backend.reservations.values():
for instance in reservation.instances:
for value in get_ec2_values(instance.id):
yield value
# EC2 NetworkInterface, resource type ec2:network-interface
for eni in self.ec2_backend.enis.values():
for value in get_ec2_values(eni.id):
yield value
# TODO EC2 ReservedInstance
# EC2 SecurityGroup, resource type ec2:security-group
for vpc in self.ec2_backend.groups.values():
for sg in vpc.values():
for value in get_ec2_values(sg.id):
yield value
# EC2 Snapshot, resource type ec2:snapshot
for snapshot in self.ec2_backend.snapshots.values():
for value in get_ec2_values(snapshot.id):
yield value
# TODO EC2 SpotInstanceRequest
# EC2 Volume, resource type ec2:volume
for volume in self.ec2_backend.volumes.values():
for value in get_ec2_values(volume.id):
yield value
def get_resources(self, pagination_token=None,
resources_per_page=50, tags_per_page=100,
tag_filters=None, resource_type_filters=None):
# Simple range checking
if not 100 <= tags_per_page <= 500:
raise RESTError('InvalidParameterException', 'TagsPerPage must be between 100 and 500')
if not 1 <= resources_per_page <= 50:
raise RESTError('InvalidParameterException', 'ResourcesPerPage must be between 1 and 50')
# If we have a token, go and find the respective generator, or error
if pagination_token:
if pagination_token not in self._pages:
raise RESTError('PaginationTokenExpiredException', 'Token does not exist')
generator = self._pages[pagination_token]['gen']
left_over = self._pages[pagination_token]['misc']
else:
generator = self._get_resources_generator(tag_filters=tag_filters,
resource_type_filters=resource_type_filters)
left_over = None
result = []
current_tags = 0
current_resources = 0
if left_over:
result.append(left_over)
current_resources += 1
current_tags += len(left_over['Tags'])
try:
while True:
# Generator format: [{'ResourceARN': str, 'Tags': [{'Key': str, 'Value': str}]}, ...]
next_item = six.next(generator)
resource_tags = len(next_item['Tags'])
if current_resources >= resources_per_page:
break
if current_tags + resource_tags >= tags_per_page:
break
current_resources += 1
current_tags += resource_tags
result.append(next_item)
except StopIteration:
# Generator was exhausted before hitting the page-size limits
return None, result
# Didn't hit StopIteration so there's stuff left in generator
new_token = str(uuid.uuid4())
self._pages[new_token] = {'gen': generator, 'misc': next_item}
# The incoming token has been consumed; delete it so it cannot be reused
if pagination_token:
del self._pages[pagination_token]
return new_token, result
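The token handling above follows a simple pattern: stash the live generator, plus the one item read past the page boundary, under a fresh token. A minimal standalone sketch of that pattern (the names `paginate` and `_pages` are illustrative, not moto's API, and the tags-per-page limit is omitted for brevity):

```python
import uuid

# Module-level store mapping pagination tokens to (generator, over-read item).
_pages = {}

def paginate(items, per_page, token=None):
    if token is not None:
        # Resume: pop the saved generator and the item we over-read last time.
        gen, left_over = _pages.pop(token)
    else:
        gen, left_over = iter(items), None
    page = [left_over] if left_over is not None else []
    try:
        while True:
            item = next(gen)
            if len(page) >= per_page:
                break  # page is full; `item` becomes the next page's left-over
            page.append(item)
    except StopIteration:
        return None, page  # exhausted: no further token
    new_token = str(uuid.uuid4())
    _pages[new_token] = (gen, item)  # remember the item we read past the page
    return new_token, page
```

Calling it twice with the returned token walks the full sequence without losing the over-read item.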
def get_tag_keys(self, pagination_token=None):
if pagination_token:
if pagination_token not in self._pages:
raise RESTError('PaginationTokenExpiredException', 'Token does not exist')
generator = self._pages[pagination_token]['gen']
left_over = self._pages[pagination_token]['misc']
else:
generator = self._get_tag_keys_generator()
left_over = None
result = []
current_tags = 0
if left_over:
result.append(left_over)
current_tags += 1
try:
while True:
# Generator format: ['tag', 'tag', 'tag', ...]
next_item = six.next(generator)
if current_tags + 1 >= 128:
break
current_tags += 1
result.append(next_item)
except StopIteration:
# Generator was exhausted before hitting the page-size limits
return None, result
# Didn't hit StopIteration so there's stuff left in generator
new_token = str(uuid.uuid4())
self._pages[new_token] = {'gen': generator, 'misc': next_item}
# The incoming token has been consumed; delete it so it cannot be reused
if pagination_token:
del self._pages[pagination_token]
return new_token, result
def get_tag_values(self, pagination_token, key):
if pagination_token:
if pagination_token not in self._pages:
raise RESTError('PaginationTokenExpiredException', 'Token does not exist')
generator = self._pages[pagination_token]['gen']
left_over = self._pages[pagination_token]['misc']
else:
generator = self._get_tag_values_generator(key)
left_over = None
result = []
current_tags = 0
if left_over:
result.append(left_over)
current_tags += 1
try:
while True:
# Generator format: ['value', 'value', 'value', ...]
next_item = six.next(generator)
if current_tags + 1 >= 128:
break
current_tags += 1
result.append(next_item)
except StopIteration:
# Generator was exhausted before hitting the page-size limits
return None, result
# Didn't hit StopIteration so there's stuff left in generator
new_token = str(uuid.uuid4())
self._pages[new_token] = {'gen': generator, 'misc': next_item}
# The incoming token has been consumed; delete it so it cannot be reused
if pagination_token:
del self._pages[pagination_token]
return new_token, result
# These methods will be called from responses.py.
# They should call a tag function inside of the moto module
# that governs the resource; that way, if the target module
# changes how tags are dealt with, there's less to change.
# def tag_resources(self, resource_arn_list, tags):
# return failed_resources_map
#
# def untag_resources(self, resource_arn_list, tag_keys):
# return failed_resources_map
available_regions = boto3.session.Session().get_available_regions("resourcegroupstaggingapi")
resourcegroupstaggingapi_backends = {region: ResourceGroupsTaggingAPIBackend(region) for region in available_regions}


@ -0,0 +1,97 @@
from __future__ import unicode_literals
from moto.core.responses import BaseResponse
from .models import resourcegroupstaggingapi_backends
import json
class ResourceGroupsTaggingAPIResponse(BaseResponse):
SERVICE_NAME = 'resourcegroupstaggingapi'
@property
def backend(self):
"""
Backend
:returns: Resource tagging api backend
:rtype: moto.resourcegroupstaggingapi.models.ResourceGroupsTaggingAPIBackend
"""
return resourcegroupstaggingapi_backends[self.region]
def get_resources(self):
pagination_token = self._get_param("PaginationToken")
tag_filters = self._get_param("TagFilters", [])
resources_per_page = self._get_int_param("ResourcesPerPage", 50)
tags_per_page = self._get_int_param("TagsPerPage", 100)
resource_type_filters = self._get_param("ResourceTypeFilters", [])
pagination_token, resource_tag_mapping_list = self.backend.get_resources(
pagination_token=pagination_token,
tag_filters=tag_filters,
resources_per_page=resources_per_page,
tags_per_page=tags_per_page,
resource_type_filters=resource_type_filters,
)
# Format tag response
response = {
'ResourceTagMappingList': resource_tag_mapping_list
}
if pagination_token:
response['PaginationToken'] = pagination_token
return json.dumps(response)
def get_tag_keys(self):
pagination_token = self._get_param("PaginationToken")
pagination_token, tag_keys = self.backend.get_tag_keys(
pagination_token=pagination_token,
)
response = {
'TagKeys': tag_keys
}
if pagination_token:
response['PaginationToken'] = pagination_token
return json.dumps(response)
def get_tag_values(self):
pagination_token = self._get_param("PaginationToken")
key = self._get_param("Key")
pagination_token, tag_values = self.backend.get_tag_values(
pagination_token=pagination_token,
key=key,
)
response = {
'TagValues': tag_values
}
if pagination_token:
response['PaginationToken'] = pagination_token
return json.dumps(response)
# These methods are all that's left to be implemented;
# the responses are already set up, so all that's needed is
# for the respective model functions to be implemented.
#
# def tag_resources(self):
# resource_arn_list = self._get_list_prefix("ResourceARNList.member")
# tags = self._get_param("Tags")
# failed_resources_map = self.backend.tag_resources(
# resource_arn_list=resource_arn_list,
# tags=tags,
# )
#
# # failed_resources_map should be {'resource': {'ErrorCode': str, 'ErrorMessage': str, 'StatusCode': int}}
# return json.dumps({'FailedResourcesMap': failed_resources_map})
#
# def untag_resources(self):
# resource_arn_list = self._get_list_prefix("ResourceARNList.member")
# tag_keys = self._get_list_prefix("TagKeys.member")
# failed_resources_map = self.backend.untag_resources(
# resource_arn_list=resource_arn_list,
# tag_keys=tag_keys,
# )
#
# # failed_resources_map should be {'resource': {'ErrorCode': str, 'ErrorMessage': str, 'StatusCode': int}}
# return json.dumps({'FailedResourcesMap': failed_resources_map})


@ -0,0 +1,10 @@
from __future__ import unicode_literals
from .responses import ResourceGroupsTaggingAPIResponse
url_bases = [
"https?://tagging.(.+).amazonaws.com",
]
url_paths = {
'{0}/$': ResourceGroupsTaggingAPIResponse.dispatch,
}


@ -196,20 +196,20 @@ class FakeZone(BaseModel):
self.rrsets = [
record_set for record_set in self.rrsets if record_set.set_identifier != set_identifier]
def get_record_sets(self, type_filter, name_filter):
def get_record_sets(self, start_type, start_name):
record_sets = list(self.rrsets) # Copy the list
if type_filter:
if start_type:
record_sets = [
record_set for record_set in record_sets if record_set._type == type_filter]
if name_filter:
record_set for record_set in record_sets if record_set._type >= start_type]
if start_name:
record_sets = [
record_set for record_set in record_sets if record_set.name == name_filter]
record_set for record_set in record_sets if record_set.name >= start_name]
return record_sets
@property
def physical_resource_id(self):
return self.name
return self.id
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):


@ -151,9 +151,9 @@ class Route53(BaseResponse):
elif method == "GET":
querystring = parse_qs(parsed_url.query)
template = Template(LIST_RRSET_REPONSE)
type_filter = querystring.get("type", [None])[0]
name_filter = querystring.get("name", [None])[0]
record_sets = the_zone.get_record_sets(type_filter, name_filter)
start_type = querystring.get("type", [None])[0]
start_name = querystring.get("name", [None])[0]
record_sets = the_zone.get_record_sets(start_type, start_name)
return 200, headers, template.render(record_sets=record_sets)
def health_check_response(self, request, full_url, headers):


@ -81,6 +81,9 @@ class FakeKey(BaseModel):
def restore(self, days):
self._expiry = datetime.datetime.utcnow() + datetime.timedelta(days)
def increment_version(self):
self._version_id += 1
@property
def etag(self):
if self._etag is None:
@ -323,19 +326,10 @@ class CorsRule(BaseModel):
def __init__(self, allowed_methods, allowed_origins, allowed_headers=None, expose_headers=None,
max_age_seconds=None):
# Python 2 and 3 have different string types for handling unicodes. Python 2 wants `basestring`,
# whereas Python 3 is OK with str. This causes issues with the XML parser, which returns
# unicode strings in Python 2. So, need to do this to make it work in both Python 2 and 3:
import sys
if sys.version_info >= (3, 0):
str_type = str
else:
str_type = basestring # noqa
self.allowed_methods = [allowed_methods] if isinstance(allowed_methods, str_type) else allowed_methods
self.allowed_origins = [allowed_origins] if isinstance(allowed_origins, str_type) else allowed_origins
self.allowed_headers = [allowed_headers] if isinstance(allowed_headers, str_type) else allowed_headers
self.exposed_headers = [expose_headers] if isinstance(expose_headers, str_type) else expose_headers
self.allowed_methods = [allowed_methods] if isinstance(allowed_methods, six.string_types) else allowed_methods
self.allowed_origins = [allowed_origins] if isinstance(allowed_origins, six.string_types) else allowed_origins
self.allowed_headers = [allowed_headers] if isinstance(allowed_headers, six.string_types) else allowed_headers
self.exposed_headers = [expose_headers] if isinstance(expose_headers, six.string_types) else expose_headers
self.max_age_seconds = max_age_seconds
@ -389,25 +383,16 @@ class FakeBucket(BaseModel):
if len(rules) > 100:
raise MalformedXML()
# Python 2 and 3 have different string types for handling unicodes. Python 2 wants `basestring`,
# whereas Python 3 is OK with str. This causes issues with the XML parser, which returns
# unicode strings in Python 2. So, need to do this to make it work in both Python 2 and 3:
import sys
if sys.version_info >= (3, 0):
str_type = str
else:
str_type = basestring # noqa
for rule in rules:
assert isinstance(rule["AllowedMethod"], list) or isinstance(rule["AllowedMethod"], str_type)
assert isinstance(rule["AllowedOrigin"], list) or isinstance(rule["AllowedOrigin"], str_type)
assert isinstance(rule["AllowedMethod"], list) or isinstance(rule["AllowedMethod"], six.string_types)
assert isinstance(rule["AllowedOrigin"], list) or isinstance(rule["AllowedOrigin"], six.string_types)
assert isinstance(rule.get("AllowedHeader", []), list) or isinstance(rule.get("AllowedHeader", ""),
str_type)
six.string_types)
assert isinstance(rule.get("ExposedHeader", []), list) or isinstance(rule.get("ExposedHeader", ""),
str_type)
assert isinstance(rule.get("MaxAgeSeconds", "0"), str_type)
six.string_types)
assert isinstance(rule.get("MaxAgeSeconds", "0"), six.string_types)
if isinstance(rule["AllowedMethod"], str_type):
if isinstance(rule["AllowedMethod"], six.string_types):
methods = [rule["AllowedMethod"]]
else:
methods = rule["AllowedMethod"]
@ -745,6 +730,10 @@ class S3Backend(BaseBackend):
if dest_key_name != src_key_name:
key = key.copy(dest_key_name)
dest_bucket.keys[dest_key_name] = key
# By this point, the destination key must exist, or KeyError
if dest_bucket.is_versioned:
dest_bucket.keys[dest_key_name].increment_version()
if storage is not None:
key.set_storage_class(storage)
if acl is not None:


@ -8,6 +8,7 @@ from six.moves.urllib.parse import parse_qs, urlparse
import xmltodict
from moto.packages.httpretty.core import HTTPrettyRequest
from moto.core.responses import _TemplateEnvironmentMixin
from moto.s3bucket_path.utils import bucket_name_from_url as bucketpath_bucket_name_from_url, parse_key_name as bucketpath_parse_key_name, is_delete_keys as bucketpath_is_delete_keys
@ -54,8 +55,10 @@ class ResponseObject(_TemplateEnvironmentMixin):
if not host:
host = urlparse(request.url).netloc
if not host or host.startswith("localhost") or re.match(r"^[^.]+$", host):
# For localhost or local domain names, default to path-based buckets
if (not host or host.startswith('localhost') or
re.match(r'^[^.]+$', host) or re.match(r'^.*\.svc\.cluster\.local$', host)):
# Default to path-based buckets for (1) localhost, (2) local host names that do not
# contain a "." (e.g., Docker container host names), or (3) Kubernetes internal host names
return False
match = re.match(r'^([^\[\]:]+)(:\d+)?$', host)
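The host check above can be exercised in isolation. A hedged sketch of the same classification (the helper name `use_path_style` is illustrative): path-style addressing is assumed for localhost, for dot-less host names such as Docker container names, and for Kubernetes service DNS names; any other host may carry the bucket name as a subdomain.

```python
import re

def use_path_style(host):
    # True when the bucket must come from the URL path, not the hostname.
    return (not host
            or host.startswith('localhost')
            or re.match(r'^[^.]+$', host) is not None
            or re.match(r'^.*\.svc\.cluster\.local$', host) is not None)
```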
@ -113,7 +116,10 @@ class ResponseObject(_TemplateEnvironmentMixin):
return 200, {}, response.encode("utf-8")
else:
status_code, headers, response_content = response
return status_code, headers, response_content.encode("utf-8")
if not isinstance(response_content, six.binary_type):
response_content = response_content.encode("utf-8")
return status_code, headers, response_content
def _bucket_response(self, request, full_url, headers):
parsed_url = urlparse(full_url)
@ -139,6 +145,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
body = b''
if isinstance(body, six.binary_type):
body = body.decode('utf-8')
body = u'{0}'.format(body).encode('utf-8')
if method == 'HEAD':
return self._bucket_response_head(bucket_name, headers)
@ -209,7 +216,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
if not website_configuration:
template = self.response_template(S3_NO_BUCKET_WEBSITE_CONFIG)
return 404, {}, template.render(bucket_name=bucket_name)
return website_configuration
return 200, {}, website_configuration
elif 'acl' in querystring:
bucket = self.backend.get_bucket(bucket_name)
template = self.response_template(S3_OBJECT_ACL_RESPONSE)
@ -355,7 +362,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
if not request.headers.get('Content-Length'):
return 411, {}, "Content-Length required"
if 'versioning' in querystring:
ver = re.search('<Status>([A-Za-z]+)</Status>', body)
ver = re.search('<Status>([A-Za-z]+)</Status>', body.decode())
if ver:
self.backend.set_bucket_versioning(bucket_name, ver.group(1))
template = self.response_template(S3_BUCKET_VERSIONING)
@ -444,7 +451,12 @@ class ResponseObject(_TemplateEnvironmentMixin):
def _bucket_response_post(self, request, body, bucket_name, headers):
if not request.headers.get('Content-Length'):
return 411, {}, "Content-Length required"
path = request.path if hasattr(request, 'path') else request.path_url
if isinstance(request, HTTPrettyRequest):
path = request.path
else:
path = request.full_path if hasattr(request, 'full_path') else request.path_url
if self.is_delete_keys(request, path, bucket_name):
return self._bucket_response_delete_keys(request, body, bucket_name, headers)
@ -454,6 +466,8 @@ class ResponseObject(_TemplateEnvironmentMixin):
form = request.form
else:
# HTTPretty, build new form object
body = body.decode()
form = {}
for kv in body.split('&'):
k, v = kv.split('=')
@ -764,7 +778,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
return FakeTagging()
def _tagging_from_xml(self, xml):
parsed_xml = xmltodict.parse(xml)
parsed_xml = xmltodict.parse(xml, force_list={'Tag': True})
tags = []
for tag in parsed_xml['Tagging']['TagSet']['Tag']:

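The `force_list={'Tag': True}` change above guards against a common XML-to-dict pitfall: a single repeated element parses to a plain dict while several parse to a list, so iterating over one `<Tag>` would walk its keys instead of the tag. A stdlib-only sketch of the equivalent defensive normalization (the helper `tags_as_list` is hypothetical, not moto's code):

```python
def tags_as_list(parsed):
    # Normalize the 'Tag' entry so callers always iterate over a list.
    tags = parsed['Tagging']['TagSet']['Tag']
    return tags if isinstance(tags, list) else [tags]

# One tag collapses to a dict; several stay a list.
one = {'Tagging': {'TagSet': {'Tag': {'Key': 'env', 'Value': 'prod'}}}}
many = {'Tagging': {'TagSet': {'Tag': [{'Key': 'a', 'Value': '1'},
                                       {'Key': 'b', 'Value': '2'}]}}}
```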

@ -32,3 +32,11 @@ class SNSInvalidParameter(RESTError):
def __init__(self, message):
super(SNSInvalidParameter, self).__init__(
"InvalidParameter", message)
class InvalidParameterValue(RESTError):
code = 400
def __init__(self, message):
super(InvalidParameterValue, self).__init__(
"InvalidParameterValue", message)


@ -7,6 +7,7 @@ import json
import boto.sns
import requests
import six
import re
from moto.compat import OrderedDict
from moto.core import BaseBackend, BaseModel
@ -15,7 +16,8 @@ from moto.sqs import sqs_backends
from moto.awslambda import lambda_backends
from .exceptions import (
SNSNotFoundError, DuplicateSnsEndpointError, SnsEndpointDisabled, SNSInvalidParameter
SNSNotFoundError, DuplicateSnsEndpointError, SnsEndpointDisabled, SNSInvalidParameter,
InvalidParameterValue
)
from .utils import make_arn_for_topic, make_arn_for_subscription
@ -146,7 +148,7 @@ class PlatformEndpoint(BaseModel):
if 'Token' not in self.attributes:
self.attributes['Token'] = self.token
if 'Enabled' not in self.attributes:
self.attributes['Enabled'] = True
self.attributes['Enabled'] = 'True'
@property
def enabled(self):
@ -193,9 +195,15 @@ class SNSBackend(BaseBackend):
self.sms_attributes.update(attrs)
def create_topic(self, name):
topic = Topic(name, self)
self.topics[topic.arn] = topic
return topic
fails_constraints = not re.match(r'^[a-zA-Z0-9](?:[A-Za-z0-9_-]{0,253}[a-zA-Z0-9])?$', name)
if fails_constraints:
raise InvalidParameterValue("Topic names must be made up of only uppercase and lowercase ASCII letters, numbers, underscores, and hyphens, and must be between 1 and 256 characters long.")
candidate_topic = Topic(name, self)
if candidate_topic.arn in self.topics:
return self.topics[candidate_topic.arn]
else:
self.topics[candidate_topic.arn] = candidate_topic
return candidate_topic
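The topic-name constraint enforced in `create_topic` can be exercised on its own. A sketch using the same regular expression (note the pattern admits names of 1 to 255 characters that start and end with an alphanumeric character; `valid_topic_name` is an illustrative helper, not moto's API):

```python
import re

# Letters, digits, underscores and hyphens; must begin and end alphanumeric.
TOPIC_NAME = re.compile(r'^[a-zA-Z0-9](?:[A-Za-z0-9_-]{0,253}[a-zA-Z0-9])?$')

def valid_topic_name(name):
    return TOPIC_NAME.match(name) is not None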
def _get_values_nexttoken(self, values_map, next_token=None):
if next_token is None:
@ -256,7 +264,10 @@ class SNSBackend(BaseBackend):
else:
return self._get_values_nexttoken(self.subscriptions, next_token)
def publish(self, arn, message):
def publish(self, arn, message, subject=None):
if subject is not None and len(subject) >= 100:
raise ValueError('Subject must be less than 100 characters')
try:
topic = self.get_topic(arn)
message_id = topic.publish(message)


@ -239,6 +239,8 @@ class SNSResponse(BaseResponse):
target_arn = self._get_param('TargetArn')
topic_arn = self._get_param('TopicArn')
phone_number = self._get_param('PhoneNumber')
subject = self._get_param('Subject')
if phone_number is not None:
# Check phone is correct syntax (e164)
if not is_e164(phone_number):
@ -261,7 +263,12 @@ class SNSResponse(BaseResponse):
arn = topic_arn
message = self._get_param('Message')
message_id = self.backend.publish(arn, message)
try:
message_id = self.backend.publish(arn, message, subject=subject)
except ValueError as err:
error_response = self._error('InvalidParameter', str(err))
return error_response, dict(status=400)
if self.request_json:
return json.dumps({


@ -16,3 +16,8 @@ class MessageAttributesInvalid(Exception):
def __init__(self, description):
self.description = description
class QueueDoesNotExist(Exception):
status_code = 404
description = "The specified queue does not exist for this wsdl version."


@ -2,6 +2,7 @@ from __future__ import unicode_literals
import base64
import hashlib
import json
import re
import six
import struct
@ -9,10 +10,16 @@ from xml.sax.saxutils import escape
import boto.sqs
from moto.core.exceptions import RESTError
from moto.core import BaseBackend, BaseModel
from moto.core.utils import camelcase_to_underscores, get_random_message_id, unix_time, unix_time_millis
from .utils import generate_receipt_handle
from .exceptions import ReceiptHandleIsInvalid, MessageNotInflight, MessageAttributesInvalid
from .exceptions import (
MessageAttributesInvalid,
MessageNotInflight,
QueueDoesNotExist,
ReceiptHandleIsInvalid,
)
DEFAULT_ACCOUNT_ID = 123456789012
DEFAULT_SENDER_ID = "AIDAIT2UOQQY3AUEKVGXU"
@ -161,11 +168,14 @@ class Queue(BaseModel):
'ReceiveMessageWaitTimeSeconds',
'VisibilityTimeout',
'WaitTimeSeconds']
ALLOWED_PERMISSIONS = ('*', 'ChangeMessageVisibility', 'DeleteMessage', 'GetQueueAttributes',
'GetQueueUrl', 'ReceiveMessage', 'SendMessage')
def __init__(self, name, region, **kwargs):
self.name = name
self.visibility_timeout = int(kwargs.get('VisibilityTimeout', 30))
self.region = region
self.tags = {}
self._messages = []
@ -184,14 +194,42 @@ class Queue(BaseModel):
self.message_retention_period = int(kwargs.get('MessageRetentionPeriod', 86400 * 4)) # four days
self.queue_arn = 'arn:aws:sqs:{0}:123456789012:{1}'.format(self.region, self.name)
self.receive_message_wait_time_seconds = int(kwargs.get('ReceiveMessageWaitTimeSeconds', 0))
self.permissions = {}
# A wait_time_seconds of 0 causes receive calls to return messages immediately
self.wait_time_seconds = int(kwargs.get('WaitTimeSeconds', 0))
self.redrive_policy = {}
self.dead_letter_queue = None
if 'RedrivePolicy' in kwargs:
self._setup_dlq(kwargs['RedrivePolicy'])
# Check some conditions
if self.fifo_queue and not self.name.endswith('.fifo'):
raise MessageAttributesInvalid('Queue name must end in .fifo for FIFO queues')
def _setup_dlq(self, policy_json):
try:
self.redrive_policy = json.loads(policy_json)
except ValueError:
raise RESTError('InvalidParameterValue', 'Redrive policy does not contain valid json')
if 'deadLetterTargetArn' not in self.redrive_policy:
raise RESTError('InvalidParameterValue', 'Redrive policy does not contain deadLetterTargetArn')
if 'maxReceiveCount' not in self.redrive_policy:
raise RESTError('InvalidParameterValue', 'Redrive policy does not contain maxReceiveCount')
for queue in sqs_backends[self.region].queues.values():
if queue.queue_arn == self.redrive_policy['deadLetterTargetArn']:
self.dead_letter_queue = queue
if self.fifo_queue and not queue.fifo_queue:
raise RESTError('InvalidParameterCombination', 'Fifo queues cannot use non fifo dead letter queues')
break
else:
raise RESTError('AWS.SimpleQueueService.NonExistentQueue', 'Could not find DLQ for {0}'.format(self.redrive_policy['deadLetterTargetArn']))
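A `RedrivePolicy` is a JSON string with two required keys. A minimal standalone version of the checks `_setup_dlq` performs above, with the moto-specific `RESTError` simplified to `ValueError` for illustration:

```python
import json

def parse_redrive_policy(policy_json):
    # Parse and validate a RedrivePolicy JSON string.
    try:
        policy = json.loads(policy_json)
    except ValueError:
        raise ValueError('Redrive policy does not contain valid json')
    for key in ('deadLetterTargetArn', 'maxReceiveCount'):
        if key not in policy:
            raise ValueError('Redrive policy does not contain %s' % key)
    return policy

policy = parse_redrive_policy(
    '{"deadLetterTargetArn": "arn:aws:sqs:us-east-1:123456789012:dlq",'
    ' "maxReceiveCount": 5}')
```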
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
properties = cloudformation_json['Properties']
@ -304,7 +342,10 @@ class SQSBackend(BaseBackend):
return qs
def get_queue(self, queue_name):
return self.queues.get(queue_name, None)
queue = self.queues.get(queue_name)
if queue is None:
raise QueueDoesNotExist()
return queue
def delete_queue(self, queue_name):
if queue_name in self.queues:
@ -374,9 +415,14 @@ class SQSBackend(BaseBackend):
time.sleep(0.001)
continue
messages_to_dlq = []
for message in queue.messages:
if not message.visible:
continue
if queue.dead_letter_queue is not None and message.approximate_receive_count >= queue.redrive_policy['maxReceiveCount']:
messages_to_dlq.append(message)
continue
message.mark_received(
visibility_timeout=visibility_timeout
)
@ -384,6 +430,10 @@ class SQSBackend(BaseBackend):
if len(result) >= count:
break
for message in messages_to_dlq:
queue._messages.remove(message)
queue.dead_letter_queue.add_message(message)
return result
def delete_message(self, queue_name, receipt_handle):
@ -411,6 +461,49 @@ class SQSBackend(BaseBackend):
queue = self.get_queue(queue_name)
queue._messages = []
def list_dead_letter_source_queues(self, queue_name):
dlq = self.get_queue(queue_name)
queues = []
for queue in self.queues.values():
if queue.dead_letter_queue is dlq:
queues.append(queue)
return queues
def add_permission(self, queue_name, actions, account_ids, label):
queue = self.get_queue(queue_name)
if actions is None or len(actions) == 0:
raise RESTError('InvalidParameterValue', 'Need at least one Action')
if account_ids is None or len(account_ids) == 0:
raise RESTError('InvalidParameterValue', 'Need at least one Account ID')
if not all([item in Queue.ALLOWED_PERMISSIONS for item in actions]):
raise RESTError('InvalidParameterValue', 'Invalid permissions')
queue.permissions[label] = (account_ids, actions)
def remove_permission(self, queue_name, label):
queue = self.get_queue(queue_name)
if label not in queue.permissions:
raise RESTError('InvalidParameterValue', 'Permission doesnt exist for the given label')
del queue.permissions[label]
def tag_queue(self, queue_name, tags):
queue = self.get_queue(queue_name)
queue.tags.update(tags)
def untag_queue(self, queue_name, tag_keys):
queue = self.get_queue(queue_name)
for key in tag_keys:
try:
del queue.tags[key]
except KeyError:
pass
sqs_backends = {}
for region in boto.sqs.regions():


@ -2,13 +2,14 @@ from __future__ import unicode_literals
from six.moves.urllib.parse import urlparse
from moto.core.responses import BaseResponse
from moto.core.utils import camelcase_to_underscores
from moto.core.utils import camelcase_to_underscores, amz_crc32, amzn_request_id
from .utils import parse_message_attributes
from .models import sqs_backends
from .exceptions import (
MessageAttributesInvalid,
MessageNotInflight,
ReceiptHandleIsInvalid
QueueDoesNotExist,
ReceiptHandleIsInvalid,
)
MAXIMUM_VISIBILTY_TIMEOUT = 43200
@ -39,18 +40,23 @@ class SQSResponse(BaseResponse):
queue_name = self.path.split("/")[-1]
return queue_name
def _get_validated_visibility_timeout(self):
def _get_validated_visibility_timeout(self, timeout=None):
"""
:raises ValueError: If specified visibility timeout exceeds MAXIMUM_VISIBILTY_TIMEOUT
:raises TypeError: If visibility timeout was not specified
"""
visibility_timeout = int(self.querystring.get("VisibilityTimeout")[0])
if timeout is not None:
visibility_timeout = int(timeout)
else:
visibility_timeout = int(self.querystring.get("VisibilityTimeout")[0])
if visibility_timeout > MAXIMUM_VISIBILTY_TIMEOUT:
raise ValueError
return visibility_timeout
@amz_crc32  # crc32 is applied last because amzn_request_id may edit the XML body
@amzn_request_id
def call_action(self):
status_code, headers, body = super(SQSResponse, self).call_action()
if status_code == 404:
@ -76,7 +82,12 @@ class SQSResponse(BaseResponse):
def get_queue_url(self):
request_url = urlparse(self.uri)
queue_name = self._get_param("QueueName")
queue = self.sqs_backend.get_queue(queue_name)
try:
queue = self.sqs_backend.get_queue(queue_name)
except QueueDoesNotExist as e:
return self._error('QueueDoesNotExist', e.description)
if queue:
template = self.response_template(GET_QUEUE_URL_RESPONSE)
return template.render(queue=queue, request_url=request_url)
@ -111,9 +122,56 @@ class SQSResponse(BaseResponse):
template = self.response_template(CHANGE_MESSAGE_VISIBILITY_RESPONSE)
return template.render()
def change_message_visibility_batch(self):
queue_name = self._get_queue_name()
entries = self._get_list_prefix('ChangeMessageVisibilityBatchRequestEntry')
success = []
error = []
for entry in entries:
try:
visibility_timeout = self._get_validated_visibility_timeout(entry['visibility_timeout'])
except ValueError:
error.append({
'Id': entry['id'],
'SenderFault': 'true',
'Code': 'InvalidParameterValue',
'Message': 'Visibility timeout invalid'
})
continue
try:
self.sqs_backend.change_message_visibility(
queue_name=queue_name,
receipt_handle=entry['receipt_handle'],
visibility_timeout=visibility_timeout
)
success.append(entry['id'])
except ReceiptHandleIsInvalid as e:
error.append({
'Id': entry['id'],
'SenderFault': 'true',
'Code': 'ReceiptHandleIsInvalid',
'Message': e.description
})
except MessageNotInflight as e:
error.append({
'Id': entry['id'],
'SenderFault': 'false',
'Code': 'AWS.SimpleQueueService.MessageNotInflight',
'Message': e.description
})
template = self.response_template(CHANGE_MESSAGE_VISIBILITY_BATCH_RESPONSE)
return template.render(success=success, errors=error)
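The batch handler above follows the SQS partial-failure convention: each entry either lands in the success list or gets its own error record, and one bad entry never fails the whole batch. A generic sketch of that pattern (`apply_batch` is an illustrative helper, not moto's API):

```python
def apply_batch(entries, action):
    # Apply `action` to each entry, collecting per-entry outcomes.
    success, errors = [], []
    for entry in entries:
        try:
            action(entry)
            success.append(entry['id'])
        except ValueError as e:
            errors.append({'Id': entry['id'], 'SenderFault': 'true',
                           'Code': 'InvalidParameterValue', 'Message': str(e)})
    return success, errors
```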
def get_queue_attributes(self):
queue_name = self._get_queue_name()
queue = self.sqs_backend.get_queue(queue_name)
try:
queue = self.sqs_backend.get_queue(queue_name)
except QueueDoesNotExist as e:
return self._error('QueueDoesNotExist', e.description)
template = self.response_template(GET_QUEUE_ATTRIBUTES_RESPONSE)
return template.render(queue=queue)
@ -250,7 +308,11 @@ class SQSResponse(BaseResponse):
def receive_message(self):
queue_name = self._get_queue_name()
queue = self.sqs_backend.get_queue(queue_name)
try:
queue = self.sqs_backend.get_queue(queue_name)
except QueueDoesNotExist as e:
return self._error('QueueDoesNotExist', e.description)
try:
message_count = int(self.querystring.get("MaxNumberOfMessages")[0])
@ -272,8 +334,62 @@ class SQSResponse(BaseResponse):
messages = self.sqs_backend.receive_messages(
queue_name, message_count, wait_time, visibility_timeout)
template = self.response_template(RECEIVE_MESSAGE_RESPONSE)
output = template.render(messages=messages)
return output
return template.render(messages=messages)
def list_dead_letter_source_queues(self):
request_url = urlparse(self.uri)
queue_name = self._get_queue_name()
source_queue_urls = self.sqs_backend.list_dead_letter_source_queues(queue_name)
template = self.response_template(LIST_DEAD_LETTER_SOURCE_QUEUES_RESPONSE)
return template.render(queues=source_queue_urls, request_url=request_url)
def add_permission(self):
queue_name = self._get_queue_name()
actions = self._get_multi_param('ActionName')
account_ids = self._get_multi_param('AWSAccountId')
label = self._get_param('Label')
self.sqs_backend.add_permission(queue_name, actions, account_ids, label)
template = self.response_template(ADD_PERMISSION_RESPONSE)
return template.render()
def remove_permission(self):
queue_name = self._get_queue_name()
label = self._get_param('Label')
self.sqs_backend.remove_permission(queue_name, label)
template = self.response_template(REMOVE_PERMISSION_RESPONSE)
return template.render()
def tag_queue(self):
queue_name = self._get_queue_name()
tags = self._get_map_prefix('Tag', key_end='.Key', value_end='.Value')
self.sqs_backend.tag_queue(queue_name, tags)
template = self.response_template(TAG_QUEUE_RESPONSE)
return template.render()
def untag_queue(self):
queue_name = self._get_queue_name()
tag_keys = self._get_multi_param('TagKey')
self.sqs_backend.untag_queue(queue_name, tag_keys)
template = self.response_template(UNTAG_QUEUE_RESPONSE)
return template.render()
def list_queue_tags(self):
queue_name = self._get_queue_name()
queue = self.sqs_backend.get_queue(queue_name)
template = self.response_template(LIST_QUEUE_TAGS_RESPONSE)
return template.render(tags=queue.tags)
CREATE_QUEUE_RESPONSE = """<CreateQueueResponse>
@ -282,7 +398,7 @@ CREATE_QUEUE_RESPONSE = """<CreateQueueResponse>
<VisibilityTimeout>{{ queue.visibility_timeout }}</VisibilityTimeout>
</CreateQueueResult>
<ResponseMetadata>
<RequestId>7a62c49f-347e-4fc4-9331-6e8e7a96aa73</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</CreateQueueResponse>"""
@ -291,7 +407,7 @@ GET_QUEUE_URL_RESPONSE = """<GetQueueUrlResponse>
<QueueUrl>{{ queue.url(request_url) }}</QueueUrl>
</GetQueueUrlResult>
<ResponseMetadata>
<RequestId>470a6f13-2ed9-4181-ad8a-2fdea142988e</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</GetQueueUrlResponse>"""
@ -302,13 +418,13 @@ LIST_QUEUES_RESPONSE = """<ListQueuesResponse>
{% endfor %}
</ListQueuesResult>
<ResponseMetadata>
<RequestId>725275ae-0b9b-4762-b238-436d7c65a1ac</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</ListQueuesResponse>"""
DELETE_QUEUE_RESPONSE = """<DeleteQueueResponse>
<ResponseMetadata>
<RequestId>6fde8d1e-52cd-4581-8cd9-c512f4c64223</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DeleteQueueResponse>"""
@ -322,13 +438,13 @@ GET_QUEUE_ATTRIBUTES_RESPONSE = """<GetQueueAttributesResponse>
{% endfor %}
</GetQueueAttributesResult>
<ResponseMetadata>
<RequestId>1ea71be5-b5a2-4f9d-b85a-945d8d08cd0b</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</GetQueueAttributesResponse>"""
SET_QUEUE_ATTRIBUTE_RESPONSE = """<SetQueueAttributesResponse>
<ResponseMetadata>
<RequestId>e5cca473-4fc0-4198-a451-8abb94d02c75</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</SetQueueAttributesResponse>"""
@ -345,7 +461,7 @@ SEND_MESSAGE_RESPONSE = """<SendMessageResponse>
</MessageId>
</SendMessageResult>
<ResponseMetadata>
<RequestId>27daac76-34dd-47df-bd01-1f6e873584a0</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</SendMessageResponse>"""
@ -393,7 +509,7 @@ RECEIVE_MESSAGE_RESPONSE = """<ReceiveMessageResponse>
{% endfor %}
</ReceiveMessageResult>
<ResponseMetadata>
<RequestId>b6633655-283d-45b4-aee4-4e84e0ae6afa</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</ReceiveMessageResponse>"""
@ -411,13 +527,13 @@ SEND_MESSAGE_BATCH_RESPONSE = """<SendMessageBatchResponse>
{% endfor %}
</SendMessageBatchResult>
<ResponseMetadata>
<RequestId>ca1ad5d0-8271-408b-8d0f-1351bf547e74</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</SendMessageBatchResponse>"""
DELETE_MESSAGE_RESPONSE = """<DeleteMessageResponse>
<ResponseMetadata>
<RequestId>b5293cb5-d306-4a17-9048-b263635abe42</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DeleteMessageResponse>"""
@ -430,22 +546,92 @@ DELETE_MESSAGE_BATCH_RESPONSE = """<DeleteMessageBatchResponse>
{% endfor %}
</DeleteMessageBatchResult>
<ResponseMetadata>
<RequestId>d6f86b7a-74d1-4439-b43f-196a1e29cd85</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</DeleteMessageBatchResponse>"""
CHANGE_MESSAGE_VISIBILITY_RESPONSE = """<ChangeMessageVisibilityResponse>
<ResponseMetadata>
<RequestId>6a7a282a-d013-4a59-aba9-335b0fa48bed</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</ChangeMessageVisibilityResponse>"""
CHANGE_MESSAGE_VISIBILITY_BATCH_RESPONSE = """<ChangeMessageVisibilityBatchResponse>
<ChangeMessageVisibilityBatchResult>
{% for success_id in success %}
<ChangeMessageVisibilityBatchResultEntry>
<Id>{{ success_id }}</Id>
</ChangeMessageVisibilityBatchResultEntry>
{% endfor %}
{% for error_dict in errors %}
<BatchResultErrorEntry>
<Id>{{ error_dict['Id'] }}</Id>
<Code>{{ error_dict['Code'] }}</Code>
<Message>{{ error_dict['Message'] }}</Message>
<SenderFault>{{ error_dict['SenderFault'] }}</SenderFault>
</BatchResultErrorEntry>
{% endfor %}
</ChangeMessageVisibilityBatchResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ChangeMessageVisibilityBatchResponse>"""
PURGE_QUEUE_RESPONSE = """<PurgeQueueResponse>
<ResponseMetadata>
<RequestId>6fde8d1e-52cd-4581-8cd9-c512f4c64223</RequestId>
<RequestId>{{ requestid }}</RequestId>
</ResponseMetadata>
</PurgeQueueResponse>"""
LIST_DEAD_LETTER_SOURCE_QUEUES_RESPONSE = """<ListDeadLetterSourceQueuesResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
<ListDeadLetterSourceQueuesResult>
{% for queue in queues %}
<QueueUrl>{{ queue.url(request_url) }}</QueueUrl>
{% endfor %}
</ListDeadLetterSourceQueuesResult>
<ResponseMetadata>
<RequestId>8ffb921f-b85e-53d9-abcf-d8d0057f38fc</RequestId>
</ResponseMetadata>
</ListDeadLetterSourceQueuesResponse>"""
ADD_PERMISSION_RESPONSE = """<AddPermissionResponse>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</AddPermissionResponse>"""
REMOVE_PERMISSION_RESPONSE = """<RemovePermissionResponse>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</RemovePermissionResponse>"""
TAG_QUEUE_RESPONSE = """<TagQueueResponse>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</TagQueueResponse>"""
UNTAG_QUEUE_RESPONSE = """<UntagQueueResponse>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</UntagQueueResponse>"""
LIST_QUEUE_TAGS_RESPONSE = """<ListQueueTagsResponse>
<ListQueueTagsResult>
{% for key, value in tags.items() %}
<Tag>
<Key>{{ key }}</Key>
<Value>{{ value }}</Value>
</Tag>
{% endfor %}
</ListQueueTagsResult>
<ResponseMetadata>
<RequestId>{{ request_id }}</RequestId>
</ResponseMetadata>
</ListQueueTagsResponse>"""
ERROR_TOO_LONG_RESPONSE = """<ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
<Error>
<Type>Sender</Type>
@ -5,13 +5,17 @@ from collections import defaultdict
from moto.core import BaseBackend, BaseModel
from moto.ec2 import ec2_backends
import time
class Parameter(BaseModel):
def __init__(self, name, value, type, description, keyid):
def __init__(self, name, value, type, description, keyid, last_modified_date, version):
self.name = name
self.type = type
self.description = description
self.keyid = keyid
self.last_modified_date = last_modified_date
self.version = version
if self.type == 'SecureString':
self.value = self.encrypt(value)
@ -33,8 +37,20 @@ class Parameter(BaseModel):
r = {
'Name': self.name,
'Type': self.type,
'Value': self.decrypt(self.value) if decrypt else self.value
'Value': self.decrypt(self.value) if decrypt else self.value,
'Version': self.version,
}
return r
def describe_response_object(self, decrypt=False):
r = self.response_object(decrypt)
r['LastModifiedDate'] = int(self.last_modified_date)
r['LastModifiedUser'] = 'N/A'
if self.description:
r['Description'] = self.description
if self.keyid:
r['KeyId'] = self.keyid
return r
@ -75,16 +91,39 @@ class SimpleSystemManagerBackend(BaseBackend):
result.append(self._parameters[name])
return result
def get_parameters_by_path(self, path, with_decryption, recursive):
"""Implement the get-parameters-by-path-API in the backend."""
result = []
# Path may come with or without a trailing slash; normalize it here.
path = path.rstrip('/') + '/'
for param in self._parameters:
if not param.startswith(path):
continue
if '/' in param[len(path) + 1:] and not recursive:
continue
result.append(self._parameters[param])
return result
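To see what the prefix check above does with and without `Recursive`, here is the same filtering logic over a plain list of parameter names (a standalone sketch, not the backend itself):

```python
def filter_by_path(names, path, recursive=False):
    """Mirror of the backend's prefix filtering over plain strings."""
    path = path.rstrip('/') + '/'  # '/app' and '/app/' behave the same
    result = []
    for name in names:
        if not name.startswith(path):
            continue
        # a remaining '/' means the parameter sits deeper than one level
        if '/' in name[len(path) + 1:] and not recursive:
            continue
        result.append(name)
    return result
```

Non-recursive `/app` returns only direct children such as `/app/name`, while `recursive=True` also picks up `/app/db/host`.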
def get_parameter(self, name, with_decryption):
if name in self._parameters:
return self._parameters[name]
return None
def put_parameter(self, name, description, value, type, keyid, overwrite):
if not overwrite and name in self._parameters:
return
previous_parameter = self._parameters.get(name)
version = 1
if previous_parameter:
version = previous_parameter.version + 1
if not overwrite:
return
last_modified_date = time.time()
self._parameters[name] = Parameter(
name, value, type, description, keyid)
name, value, type, description, keyid, last_modified_date, version)
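The overwrite/versioning rules above boil down to: the first put creates version 1, a repeat put without `Overwrite` is silently ignored, and an overwrite bumps the version. A minimal dict-based sketch of those rules (field names here are illustrative, not the `Parameter` model):

```python
import time

def put_parameter(store, name, value, overwrite=False):
    """Store a parameter, bumping its version on overwrite.
    Re-putting an existing name without overwrite=True is a no-op."""
    previous = store.get(name)
    version = 1
    if previous is not None:
        if not overwrite:
            return
        version = previous['version'] + 1
    store[name] = {'value': value, 'version': version,
                   'last_modified_date': time.time()}
```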
def add_tags_to_resource(self, resource_type, resource_id, tags):
for key, value in tags.items():
@ -81,6 +81,25 @@ class SimpleSystemManagerResponse(BaseResponse):
response['InvalidParameters'].append(name)
return json.dumps(response)
def get_parameters_by_path(self):
path = self._get_param('Path')
with_decryption = self._get_param('WithDecryption')
recursive = self._get_param('Recursive', False)
result = self.ssm_backend.get_parameters_by_path(
path, with_decryption, recursive
)
response = {
'Parameters': [],
}
for parameter in result:
param_data = parameter.response_object(with_decryption)
response['Parameters'].append(param_data)
return json.dumps(response)
def describe_parameters(self):
page_size = 10
filters = self._get_param('Filters')
@ -98,7 +117,7 @@ class SimpleSystemManagerResponse(BaseResponse):
end = token + page_size
for parameter in result[token:]:
param_data = parameter.response_object(False)
param_data = parameter.describe_response_object(False)
add = False
if filters:
@ -1,6 +1,7 @@
from __future__ import unicode_literals
from .models import xray_backends
from ..core.models import base_decorator
from .mock_client import mock_xray_client, XRaySegment # noqa
xray_backend = xray_backends['us-east-1']
mock_xray = base_decorator(xray_backends)
moto/xray/mock_client.py Normal file
@ -0,0 +1,83 @@
from functools import wraps
import os
from moto.xray import xray_backends
import aws_xray_sdk.core
from aws_xray_sdk.core.context import Context as AWSContext
from aws_xray_sdk.core.emitters.udp_emitter import UDPEmitter
class MockEmitter(UDPEmitter):
"""
Replaces the code that sends UDP to local X-Ray daemon
"""
def __init__(self, daemon_address='127.0.0.1:2000'):
address = os.getenv('AWS_XRAY_DAEMON_ADDRESS_YEAH_NOT_TODAY_MATE', daemon_address)
self._ip, self._port = self._parse_address(address)
def _xray_backend(self, region):
return xray_backends[region]
def send_entity(self, entity):
# Hack to get region
# region = entity.subsegments[0].aws['region']
# xray = self._xray_backend(region)
# TODO store X-Ray data, pretty sure X-Ray needs refactor for this
pass
def _send_data(self, data):
raise RuntimeError('Should not be running this')
def mock_xray_client(f):
"""
    Mocks the X-Ray SDK by pwning its evil singleton with our methods.

    The X-Ray SDK has normally been imported and `patched()` called long before we start mocking.
    This means the Context() will be very unhappy if an env var isn't present, so we set that, save
    the old context, then supply our new context.
    We also patch the Emitter by subclassing the UDPEmitter class, replacing its methods, and pushing
    that into the recorder instance.
"""
@wraps(f)
def _wrapped(*args, **kwargs):
print("Starting X-Ray Patch")
old_xray_context_var = os.environ.get('AWS_XRAY_CONTEXT_MISSING')
os.environ['AWS_XRAY_CONTEXT_MISSING'] = 'LOG_ERROR'
old_xray_context = aws_xray_sdk.core.xray_recorder._context
old_xray_emitter = aws_xray_sdk.core.xray_recorder._emitter
aws_xray_sdk.core.xray_recorder._context = AWSContext()
aws_xray_sdk.core.xray_recorder._emitter = MockEmitter()
try:
f(*args, **kwargs)
finally:
if old_xray_context_var is None:
del os.environ['AWS_XRAY_CONTEXT_MISSING']
else:
os.environ['AWS_XRAY_CONTEXT_MISSING'] = old_xray_context_var
aws_xray_sdk.core.xray_recorder._emitter = old_xray_emitter
aws_xray_sdk.core.xray_recorder._context = old_xray_context
return _wrapped
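The save/patch/restore dance `mock_xray_client` performs for `AWS_XRAY_CONTEXT_MISSING` is a reusable pattern; a generic, dependency-free sketch of just the environment-variable part (the decorator name is illustrative):

```python
import os
from functools import wraps

def with_env(key, value):
    """Decorator: set an env var for the duration of the call,
    then restore the previous state, including 'was not set at all'."""
    def decorator(f):
        @wraps(f)
        def wrapped(*args, **kwargs):
            old = os.environ.get(key)
            os.environ[key] = value
            try:
                return f(*args, **kwargs)
            finally:
                if old is None:
                    del os.environ[key]  # var did not exist before
                else:
                    os.environ[key] = old
        return wrapped
    return decorator
```

The `try`/`finally` guarantees restoration even when the wrapped test raises, which is exactly why `mock_xray_client` restores the recorder's context and emitter in a `finally` block.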
class XRaySegment(object):
"""
    X-Ray is request oriented: when a request comes in, middleware such as Django's (or Lambda, automatically) will mark
    the start of a segment, which stays open during the lifetime of the request. During that time subsegments may be generated
    by calling other SDK-aware services or using some boto functions. Once the request is finished, the middleware will also stop
    the segment, causing it to be emitted via UDP.

    During testing we have to control the start and end of a segment ourselves, via this context manager.
"""
def __enter__(self):
aws_xray_sdk.core.xray_recorder.begin_segment(name='moto_mock', traceid=None, parent_id=None, sampling=1)
return self
def __exit__(self, exc_type, exc_val, exc_tb):
aws_xray_sdk.core.xray_recorder.end_segment()
@ -3,7 +3,7 @@ mock
nose
sure==1.2.24
coverage
flake8
flake8==3.4.1
freezegun
flask
boto>=2.45.0
scripts/get_amis.py Normal file
@ -0,0 +1,40 @@
import boto3
import json
# Taken from the free tier list when creating an instance
instances = [
'ami-760aaa0f', 'ami-bb9a6bc2', 'ami-35e92e4c', 'ami-785db401', 'ami-b7e93bce', 'ami-dca37ea5', 'ami-999844e0',
'ami-9b32e8e2', 'ami-f8e54081', 'ami-bceb39c5', 'ami-03cf127a', 'ami-1ecc1e67', 'ami-c2ff2dbb', 'ami-12c6146b',
'ami-d1cb19a8', 'ami-61db0918', 'ami-56ec3e2f', 'ami-84ee3cfd', 'ami-86ee3cff', 'ami-f0e83a89', 'ami-1f12c066',
'ami-afee3cd6', 'ami-1812c061', 'ami-77ed3f0e', 'ami-3bf32142', 'ami-6ef02217', 'ami-f4cf1d8d', 'ami-3df32144',
'ami-c6f321bf', 'ami-24f3215d', 'ami-fa7cdd89', 'ami-1e749f67', 'ami-a9cc1ed0', 'ami-8104a4f8'
]
client = boto3.client('ec2', region_name='eu-west-1')
test = client.describe_images(ImageIds=instances)
result = []
for image in test['Images']:
try:
tmp = {
'ami_id': image['ImageId'],
'name': image['Name'],
'description': image['Description'],
'owner_id': image['OwnerId'],
'public': image['Public'],
'virtualization_type': image['VirtualizationType'],
'architecture': image['Architecture'],
'state': image['State'],
'platform': image.get('Platform'),
'image_type': image['ImageType'],
'hypervisor': image['Hypervisor'],
'root_device_name': image['RootDeviceName'],
'root_device_type': image['RootDeviceType'],
'sriov': image.get('SriovNetSupport', 'simple')
}
result.append(tmp)
except Exception as err:
pass
print(json.dumps(result, indent=2))
@ -44,7 +44,7 @@ def calculate_implementation_coverage():
def print_implementation_coverage():
coverage = calculate_implementation_coverage()
for service_name in coverage:
for service_name in sorted(coverage):
implemented = coverage.get(service_name)['implemented']
not_implemented = coverage.get(service_name)['not_implemented']
operations = sorted(implemented + not_implemented)
@ -56,14 +56,14 @@ def print_implementation_coverage():
else:
percentage_implemented = 0
print("-----------------------")
print("{} - {}% implemented".format(service_name, percentage_implemented))
print("-----------------------")
print("")
print("## {} - {}% implemented".format(service_name, percentage_implemented))
for op in operations:
if op in implemented:
print("[X] {}".format(op))
print("- [X] {}".format(op))
else:
print("[ ] {}".format(op))
print("- [ ] {}".format(op))
if __name__ == '__main__':
print_implementation_coverage()
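With this change the coverage report is valid GitHub-flavored markdown: a `##` heading plus a task-list checklist per service. A condensed sketch of the per-service rendering:

```python
def render_service(service_name, implemented, not_implemented):
    """Render one service's coverage block as a markdown checklist."""
    operations = sorted(implemented + not_implemented)
    total = len(operations)
    percentage = int(100.0 * len(implemented) / total) if total else 0
    lines = ['## {} - {}% implemented'.format(service_name, percentage)]
    for op in operations:
        mark = 'X' if op in implemented else ' '
        lines.append('- [{}] {}'.format(mark, op))
    return '\n'.join(lines)
```

Concatenating these blocks over `sorted(coverage)` yields a document like the newly added IMPLEMENTATION_COVERAGE.md.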
@ -81,12 +81,14 @@ def select_service_and_operation():
raise click.Abort()
return service_name, operation_name
def get_escaped_service(service):
return service.replace('-', '')
def get_lib_dir(service):
return os.path.join('moto', service)
return os.path.join('moto', get_escaped_service(service))
def get_test_dir(service):
return os.path.join('tests', 'test_{}'.format(service))
return os.path.join('tests', 'test_{}'.format(get_escaped_service(service)))
def render_template(tmpl_dir, tmpl_filename, context, service, alt_filename=None):
@ -117,7 +119,7 @@ def append_mock_to_init_py(service):
filtered_lines = [_ for _ in lines if re.match('^from.*mock.*$', _)]
last_import_line_index = lines.index(filtered_lines[-1])
new_line = 'from .{} import mock_{} # flake8: noqa'.format(service, service)
new_line = 'from .{} import mock_{} # flake8: noqa'.format(get_escaped_service(service), get_escaped_service(service))
lines.insert(last_import_line_index + 1, new_line)
body = '\n'.join(lines) + '\n'
@ -135,7 +137,7 @@ def append_mock_import_to_backends_py(service):
filtered_lines = [_ for _ in lines if re.match('^from.*backends.*$', _)]
last_import_line_index = lines.index(filtered_lines[-1])
new_line = 'from moto.{} import {}_backends'.format(service, service)
new_line = 'from moto.{} import {}_backends'.format(get_escaped_service(service), get_escaped_service(service))
lines.insert(last_import_line_index + 1, new_line)
body = '\n'.join(lines) + '\n'
@ -147,13 +149,12 @@ def append_mock_dict_to_backends_py(service):
with open(path) as f:
lines = [_.replace('\n', '') for _ in f.readlines()]
# 'xray': xray_backends
if any(_ for _ in lines if re.match(".*'{}': {}_backends.*".format(service, service), _)):
return
filtered_lines = [_ for _ in lines if re.match(".*'.*':.*_backends.*", _)]
last_elem_line_index = lines.index(filtered_lines[-1])
new_line = " '{}': {}_backends,".format(service, service)
new_line = " '{}': {}_backends,".format(service, get_escaped_service(service))
prev_line = lines[last_elem_line_index]
if not prev_line.endswith('{') and not prev_line.endswith(','):
lines[last_elem_line_index] += ','
@ -166,8 +167,8 @@ def append_mock_dict_to_backends_py(service):
def initialize_service(service, operation, api_protocol):
"""create lib and test dirs if not exist
"""
lib_dir = os.path.join('moto', service)
test_dir = os.path.join('tests', 'test_{}'.format(service))
lib_dir = get_lib_dir(service)
test_dir = get_test_dir(service)
print_progress('Initializing service', service, 'green')
@ -178,7 +179,9 @@ def initialize_service(service, operation, api_protocol):
tmpl_context = {
'service': service,
'service_class': service_class,
'endpoint_prefix': endpoint_prefix
'endpoint_prefix': endpoint_prefix,
'api_protocol': api_protocol,
'escaped_service': get_escaped_service(service)
}
# initialize service directory
@ -202,7 +205,7 @@ def initialize_service(service, operation, api_protocol):
os.makedirs(test_dir)
tmpl_dir = os.path.join(TEMPLATE_DIR, 'test')
for tmpl_filename in os.listdir(tmpl_dir):
alt_filename = 'test_{}.py'.format(service) if tmpl_filename == 'test_service.py.j2' else None
alt_filename = 'test_{}.py'.format(get_escaped_service(service)) if tmpl_filename == 'test_service.py.j2' else None
render_template(
tmpl_dir, tmpl_filename, tmpl_context, service, alt_filename
)
@ -212,9 +215,16 @@ def initialize_service(service, operation, api_protocol):
append_mock_import_to_backends_py(service)
append_mock_dict_to_backends_py(service)
def to_upper_camel_case(s):
return ''.join([_.title() for _ in s.split('_')])
def to_lower_camel_case(s):
words = s.split('_')
return ''.join(words[:1] + [_.title() for _ in words[1:]])
def to_snake_case(s):
s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', s)
return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()
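The three casing helpers above convert between boto's CamelCase operation names and Python's snake_case. A standalone copy showing the round trip:

```python
import re

def to_upper_camel_case(s):
    return ''.join(w.title() for w in s.split('_'))

def to_lower_camel_case(s):
    words = s.split('_')
    return ''.join(words[:1] + [w.title() for w in words[1:]])

def to_snake_case(s):
    # insert '_' before each capitalized word, then lowercase everything
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', s)
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()

# to_upper_camel_case('get_parameters_by_path') -> 'GetParametersByPath'
# to_lower_camel_case('get_parameters_by_path') -> 'getParametersByPath'
# to_snake_case('GetParametersByPath')         -> 'get_parameters_by_path'
```

UpperCamelCase matches the AWS operation name (`GetParametersByPath`), lowerCamelCase matches JSON response keys, and snake_case matches the generated Python method names.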
@ -229,25 +239,28 @@ def get_function_in_responses(service, operation, protocol):
aws_operation_name = to_upper_camel_case(operation)
op_model = client._service_model.operation_model(aws_operation_name)
outputs = op_model.output_shape.members
if not hasattr(op_model.output_shape, 'members'):
outputs = {}
else:
outputs = op_model.output_shape.members
inputs = op_model.input_shape.members
input_names = [to_snake_case(_) for _ in inputs.keys() if _ not in INPUT_IGNORED_IN_BACKEND]
output_names = [to_snake_case(_) for _ in outputs.keys() if _ not in OUTPUT_IGNORED_IN_BACKEND]
body = 'def {}(self):\n'.format(operation)
body = '\ndef {}(self):\n'.format(operation)
for input_name, input_type in inputs.items():
type_name = input_type.type_name
if type_name == 'integer':
arg_line_tmpl = ' {} = _get_int_param("{}")\n'
arg_line_tmpl = ' {} = self._get_int_param("{}")\n'
elif type_name == 'list':
arg_line_tmpl = ' {} = self._get_list_prefix("{}.member")\n'
else:
arg_line_tmpl = ' {} = self._get_param("{}")\n'
body += arg_line_tmpl.format(to_snake_case(input_name), input_name)
if output_names:
body += ' {} = self.{}_backend.{}(\n'.format(','.join(output_names), service, operation)
body += ' {} = self.{}_backend.{}(\n'.format(', '.join(output_names), get_escaped_service(service), operation)
else:
body += ' self.{}_backend.{}(\n'.format(service, operation)
body += ' self.{}_backend.{}(\n'.format(get_escaped_service(service), operation)
for input_name in input_names:
body += ' {}={},\n'.format(input_name, input_name)
@ -255,11 +268,11 @@ def get_function_in_responses(service, operation, protocol):
if protocol == 'query':
body += ' template = self.response_template({}_TEMPLATE)\n'.format(operation.upper())
body += ' return template.render({})\n'.format(
','.join(['{}={}'.format(_, _) for _ in output_names])
', '.join(['{}={}'.format(_, _) for _ in output_names])
)
elif protocol == 'json':
body += ' # TODO: adjust reponse\n'
body += ' return json.dumps({})\n'.format(','.join(['{}={}'.format(_, _) for _ in output_names]))
elif protocol in ['json', 'rest-json']:
body += ' # TODO: adjust response\n'
body += ' return json.dumps(dict({}))\n'.format(', '.join(['{}={}'.format(to_lower_camel_case(_), _) for _ in output_names]))
return body
@ -272,7 +285,10 @@ def get_function_in_models(service, operation):
aws_operation_name = to_upper_camel_case(operation)
op_model = client._service_model.operation_model(aws_operation_name)
inputs = op_model.input_shape.members
outputs = op_model.output_shape.members
if not hasattr(op_model.output_shape, 'members'):
outputs = {}
else:
outputs = op_model.output_shape.members
input_names = [to_snake_case(_) for _ in inputs.keys() if _ not in INPUT_IGNORED_IN_BACKEND]
output_names = [to_snake_case(_) for _ in outputs.keys() if _ not in OUTPUT_IGNORED_IN_BACKEND]
if input_names:
@ -280,7 +296,7 @@ def get_function_in_models(service, operation):
else:
body = 'def {}(self)\n'
body += ' # implement here\n'
body += ' return {}\n'.format(', '.join(output_names))
body += ' return {}\n\n'.format(', '.join(output_names))
return body
@ -388,13 +404,13 @@ def insert_code_to_class(path, base_class, new_code):
f.write(body)
def insert_url(service, operation):
def insert_url(service, operation, api_protocol):
client = boto3.client(service)
service_class = client.__class__.__name__
aws_operation_name = to_upper_camel_case(operation)
uri = client._service_model.operation_model(aws_operation_name).http['requestUri']
path = os.path.join(os.path.dirname(__file__), '..', 'moto', service, 'urls.py')
path = os.path.join(os.path.dirname(__file__), '..', 'moto', get_escaped_service(service), 'urls.py')
with open(path) as f:
lines = [_.replace('\n', '') for _ in f.readlines()]
@ -413,81 +429,55 @@ def insert_url(service, operation):
if not prev_line.endswith('{') and not prev_line.endswith(','):
lines[last_elem_line_index] += ','
new_line = " '{0}%s$': %sResponse.dispatch," % (
uri, service_class
)
# generate url pattern
if api_protocol == 'rest-json':
new_line = " '{0}/.*$': response.dispatch,"
else:
new_line = " '{0}%s$': %sResponse.dispatch," % (
uri, service_class
)
if new_line in lines:
return
lines.insert(last_elem_line_index + 1, new_line)
body = '\n'.join(lines) + '\n'
with open(path, 'w') as f:
f.write(body)
def insert_query_codes(service, operation):
func_in_responses = get_function_in_responses(service, operation, 'query')
def insert_codes(service, operation, api_protocol):
func_in_responses = get_function_in_responses(service, operation, api_protocol)
func_in_models = get_function_in_models(service, operation)
template = get_response_query_template(service, operation)
# edit responses.py
responses_path = 'moto/{}/responses.py'.format(service)
responses_path = 'moto/{}/responses.py'.format(get_escaped_service(service))
print_progress('inserting code', responses_path, 'green')
insert_code_to_class(responses_path, BaseResponse, func_in_responses)
# insert template
with open(responses_path) as f:
lines = [_[:-1] for _ in f.readlines()]
lines += template.splitlines()
with open(responses_path, 'w') as f:
f.write('\n'.join(lines))
if api_protocol == 'query':
template = get_response_query_template(service, operation)
with open(responses_path) as f:
lines = [_[:-1] for _ in f.readlines()]
lines += template.splitlines()
with open(responses_path, 'w') as f:
f.write('\n'.join(lines))
# edit models.py
models_path = 'moto/{}/models.py'.format(service)
models_path = 'moto/{}/models.py'.format(get_escaped_service(service))
print_progress('inserting code', models_path, 'green')
insert_code_to_class(models_path, BaseBackend, func_in_models)
# edit urls.py
insert_url(service, operation)
insert_url(service, operation, api_protocol)
def insert_json_codes(service, operation):
func_in_responses = get_function_in_responses(service, operation, 'json')
func_in_models = get_function_in_models(service, operation)
# edit responses.py
responses_path = 'moto/{}/responses.py'.format(service)
print_progress('inserting code', responses_path, 'green')
insert_code_to_class(responses_path, BaseResponse, func_in_responses)
# edit models.py
models_path = 'moto/{}/models.py'.format(service)
print_progress('inserting code', models_path, 'green')
insert_code_to_class(models_path, BaseBackend, func_in_models)
# edit urls.py
insert_url(service, operation)
def insert_restjson_codes(service, operation):
func_in_models = get_function_in_models(service, operation)
print_progress('skipping inserting code to responses.py', "dont't know how to implement", 'yellow')
# edit models.py
models_path = 'moto/{}/models.py'.format(service)
print_progress('inserting code', models_path, 'green')
insert_code_to_class(models_path, BaseBackend, func_in_models)
# edit urls.py
insert_url(service, operation)
@click.command()
def main():
service, operation = select_service_and_operation()
api_protocol = boto3.client(service)._service_model.metadata['protocol']
initialize_service(service, operation, api_protocol)
if api_protocol == 'query':
insert_query_codes(service, operation)
elif api_protocol == 'json':
insert_json_codes(service, operation)
elif api_protocol == 'rest-json':
insert_restjson_codes(service, operation)
if api_protocol in ['query', 'json', 'rest-json']:
insert_codes(service, operation, api_protocol)
else:
print_progress('skip inserting code', 'api protocol "{}" is not supported'.format(api_protocol), 'yellow')
@ -1,7 +1,7 @@
from __future__ import unicode_literals
from .models import {{ service }}_backends
from .models import {{ escaped_service }}_backends
from ..core.models import base_decorator
{{ service }}_backend = {{ service }}_backends['us-east-1']
mock_{{ service }} = base_decorator({{ service }}_backends)
{{ escaped_service }}_backend = {{ escaped_service }}_backends['us-east-1']
mock_{{ escaped_service }} = base_decorator({{ escaped_service }}_backends)
@ -17,4 +17,4 @@ class {{ service_class }}Backend(BaseBackend):
available_regions = boto3.session.Session().get_available_regions("{{ service }}")
{{ service }}_backends = {region: {{ service_class }}Backend for region in available_regions}
{{ escaped_service }}_backends = {region: {{ service_class }}Backend(region) for region in available_regions}
@ -1,12 +1,14 @@
from __future__ import unicode_literals
from moto.core.responses import BaseResponse
from .models import {{ service }}_backends
from .models import {{ escaped_service }}_backends
import json
class {{ service_class }}Response(BaseResponse):
SERVICE_NAME = '{{ service }}'
@property
def {{ service }}_backend(self):
return {{ service }}_backends[self.region]
def {{ escaped_service }}_backend(self):
return {{ escaped_service }}_backends[self.region]
# add methods from here
@ -5,5 +5,9 @@ url_bases = [
"https?://{{ endpoint_prefix }}.(.+).amazonaws.com",
]
{% if api_protocol == 'rest-json' %}
response = {{ service_class }}Response()
{% endif %}
url_paths = {
}
@ -3,14 +3,14 @@ from __future__ import unicode_literals
import sure # noqa
import moto.server as server
from moto import mock_{{ service }}
from moto import mock_{{ escaped_service }}
'''
Test the different server responses
'''
@mock_{{ service }}
def test_{{ service }}_list():
@mock_{{ escaped_service }}
def test_{{ escaped_service }}_list():
backend = server.create_backend_app("{{ service }}")
test_client = backend.test_client()
# do test
@ -2,10 +2,10 @@ from __future__ import unicode_literals
import boto3
import sure # noqa
from moto import mock_{{ service }}
from moto import mock_{{ escaped_service }}
@mock_{{ service }}
@mock_{{ escaped_service }}
def test_list():
# do test
pass
@ -9,6 +9,7 @@ install_requires = [
"Jinja2>=2.8",
"boto>=2.36.0",
"boto3>=1.2.1",
"botocore>=1.7.12",
"cookies",
"cryptography>=2.0.0",
"requests>=2.5",
@ -19,7 +20,9 @@ install_requires = [
"pytz",
"python-dateutil<3.0.0,>=2.1",
"mock",
"docker>=2.5.1"
"docker>=2.5.1",
"jsondiff==1.1.1",
"aws-xray-sdk>=0.93",
]
extras_require = {
@ -36,7 +39,7 @@ else:
setup(
name='moto',
version='1.1.21',
version='1.1.25',
description='A library that allows your python tests to easily'
' mock out the boto library',
author='Steve Pulec',
@ -4,6 +4,7 @@ import os
import boto3
from freezegun import freeze_time
import sure # noqa
import uuid
from botocore.exceptions import ClientError
@ -281,12 +282,37 @@ def test_resend_validation_email_invalid():
def test_request_certificate():
client = boto3.client('acm', region_name='eu-central-1')
token = str(uuid.uuid4())
resp = client.request_certificate(
DomainName='google.com',
IdempotencyToken=token,
SubjectAlternativeNames=['google.com', 'www.google.com', 'mail.google.com'],
)
resp.should.contain('CertificateArn')
arn = resp['CertificateArn']
resp = client.request_certificate(
DomainName='google.com',
IdempotencyToken=token,
SubjectAlternativeNames=['google.com', 'www.google.com', 'mail.google.com'],
)
resp['CertificateArn'].should.equal(arn)
@mock_acm
def test_request_certificate_no_san():
client = boto3.client('acm', region_name='eu-central-1')
resp = client.request_certificate(
DomainName='google.com'
)
resp.should.contain('CertificateArn')
resp2 = client.describe_certificate(
CertificateArn=resp['CertificateArn']
)
resp2.should.contain('Certificate')
# # Also tests the SAN code
# # requires Pull: https://github.com/spulec/freezegun/pull/210
@ -8,7 +8,7 @@ from boto.ec2.autoscale import Tag
import boto.ec2.elb
import sure # noqa
from moto import mock_autoscaling, mock_ec2_deprecated, mock_elb_deprecated, mock_autoscaling_deprecated, mock_ec2
from moto import mock_autoscaling, mock_ec2_deprecated, mock_elb_deprecated, mock_elb, mock_autoscaling_deprecated, mock_ec2
from tests.helpers import requires_boto_gte
@ -311,6 +311,7 @@ def test_autoscaling_group_describe_instances():
instances = list(conn.get_all_autoscaling_instances())
instances.should.have.length_of(2)
instances[0].launch_config_name.should.equal('tester')
instances[0].health_status.should.equal('Healthy')
autoscale_instance_ids = [instance.instance_id for instance in instances]
ec2_conn = boto.connect_ec2()
@ -484,6 +485,173 @@ Boto3
'''
@mock_autoscaling
@mock_elb
def test_describe_load_balancers():
INSTANCE_COUNT = 2
elb_client = boto3.client('elb', region_name='us-east-1')
elb_client.create_load_balancer(
LoadBalancerName='my-lb',
Listeners=[
{'Protocol': 'tcp', 'LoadBalancerPort': 80, 'InstancePort': 8080}],
AvailabilityZones=['us-east-1a', 'us-east-1b']
)
client = boto3.client('autoscaling', region_name='us-east-1')
client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
LoadBalancerNames=['my-lb'],
MinSize=0,
MaxSize=INSTANCE_COUNT,
DesiredCapacity=INSTANCE_COUNT,
Tags=[{
"ResourceId": 'test_asg',
"Key": 'test_key',
"Value": 'test_value',
"PropagateAtLaunch": True
}]
)
response = client.describe_load_balancers(AutoScalingGroupName='test_asg')
list(response['LoadBalancers']).should.have.length_of(1)
response['LoadBalancers'][0]['LoadBalancerName'].should.equal('my-lb')
@mock_autoscaling
@mock_elb
def test_create_elb_and_autoscaling_group_no_relationship():
INSTANCE_COUNT = 2
ELB_NAME = 'my-elb'
elb_client = boto3.client('elb', region_name='us-east-1')
elb_client.create_load_balancer(
LoadBalancerName=ELB_NAME,
Listeners=[
{'Protocol': 'tcp', 'LoadBalancerPort': 80, 'InstancePort': 8080}],
AvailabilityZones=['us-east-1a', 'us-east-1b']
)
client = boto3.client('autoscaling', region_name='us-east-1')
client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=INSTANCE_COUNT,
DesiredCapacity=INSTANCE_COUNT,
)
# autoscaling group and elb should have no relationship
response = client.describe_load_balancers(
AutoScalingGroupName='test_asg'
)
list(response['LoadBalancers']).should.have.length_of(0)
response = elb_client.describe_load_balancers(
LoadBalancerNames=[ELB_NAME]
)
list(response['LoadBalancerDescriptions'][0]['Instances']).should.have.length_of(0)
@mock_autoscaling
@mock_elb
def test_attach_load_balancer():
INSTANCE_COUNT = 2
elb_client = boto3.client('elb', region_name='us-east-1')
elb_client.create_load_balancer(
LoadBalancerName='my-lb',
Listeners=[
{'Protocol': 'tcp', 'LoadBalancerPort': 80, 'InstancePort': 8080}],
AvailabilityZones=['us-east-1a', 'us-east-1b']
)
client = boto3.client('autoscaling', region_name='us-east-1')
client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=INSTANCE_COUNT,
DesiredCapacity=INSTANCE_COUNT,
Tags=[{
"ResourceId": 'test_asg',
"Key": 'test_key',
"Value": 'test_value',
"PropagateAtLaunch": True
}]
)
response = client.attach_load_balancers(
AutoScalingGroupName='test_asg',
LoadBalancerNames=['my-lb'])
response['ResponseMetadata']['HTTPStatusCode'].should.equal(200)
response = elb_client.describe_load_balancers(
LoadBalancerNames=['my-lb']
)
list(response['LoadBalancerDescriptions'][0]['Instances']).should.have.length_of(INSTANCE_COUNT)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=["test_asg"]
)
list(response['AutoScalingGroups'][0]['LoadBalancerNames']).should.have.length_of(1)
@mock_autoscaling
@mock_elb
def test_detach_load_balancer():
INSTANCE_COUNT = 2
elb_client = boto3.client('elb', region_name='us-east-1')
elb_client.create_load_balancer(
LoadBalancerName='my-lb',
Listeners=[
{'Protocol': 'tcp', 'LoadBalancerPort': 80, 'InstancePort': 8080}],
AvailabilityZones=['us-east-1a', 'us-east-1b']
)
client = boto3.client('autoscaling', region_name='us-east-1')
client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
LoadBalancerNames=['my-lb'],
MinSize=0,
MaxSize=INSTANCE_COUNT,
DesiredCapacity=INSTANCE_COUNT,
Tags=[{
"ResourceId": 'test_asg',
"Key": 'test_key',
"Value": 'test_value',
"PropagateAtLaunch": True
}]
)
response = client.detach_load_balancers(
AutoScalingGroupName='test_asg',
LoadBalancerNames=['my-lb'])
response['ResponseMetadata']['HTTPStatusCode'].should.equal(200)
response = elb_client.describe_load_balancers(
LoadBalancerNames=['my-lb']
)
list(response['LoadBalancerDescriptions'][0]['Instances']).should.have.length_of(0)
response = client.describe_load_balancers(AutoScalingGroupName='test_asg')
list(response['LoadBalancers']).should.have.length_of(0)
@mock_autoscaling
def test_create_autoscaling_group_boto3():
client = boto3.client('autoscaling', region_name='us-east-1')
@ -653,3 +821,200 @@ def test_autoscaling_describe_policies_boto3():
response['ScalingPolicies'].should.have.length_of(1)
response['ScalingPolicies'][0][
'PolicyName'].should.equal('test_policy_down')
@mock_autoscaling
@mock_ec2
def test_detach_one_instance_decrement():
client = boto3.client('autoscaling', region_name='us-east-1')
_ = client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=2,
DesiredCapacity=2,
Tags=[
{'ResourceId': 'test_asg',
'ResourceType': 'auto-scaling-group',
'Key': 'propagated-tag-key',
'Value': 'propagate-tag-value',
'PropagateAtLaunch': True
}]
)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
instance_to_detach = response['AutoScalingGroups'][0]['Instances'][0]['InstanceId']
instance_to_keep = response['AutoScalingGroups'][0]['Instances'][1]['InstanceId']
ec2_client = boto3.client('ec2', region_name='us-east-1')
response = ec2_client.describe_instances(InstanceIds=[instance_to_detach])
response = client.detach_instances(
AutoScalingGroupName='test_asg',
InstanceIds=[instance_to_detach],
ShouldDecrementDesiredCapacity=True
)
response['ResponseMetadata']['HTTPStatusCode'].should.equal(200)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
response['AutoScalingGroups'][0]['Instances'].should.have.length_of(1)
# test to ensure tag has been removed
response = ec2_client.describe_instances(InstanceIds=[instance_to_detach])
tags = response['Reservations'][0]['Instances'][0]['Tags']
tags.should.have.length_of(1)
# test to ensure tag is present on other instance
response = ec2_client.describe_instances(InstanceIds=[instance_to_keep])
tags = response['Reservations'][0]['Instances'][0]['Tags']
tags.should.have.length_of(2)
@mock_autoscaling
@mock_ec2
def test_detach_one_instance():
client = boto3.client('autoscaling', region_name='us-east-1')
_ = client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=2,
DesiredCapacity=2,
Tags=[
{'ResourceId': 'test_asg',
'ResourceType': 'auto-scaling-group',
'Key': 'propagated-tag-key',
'Value': 'propagate-tag-value',
'PropagateAtLaunch': True
}]
)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
instance_to_detach = response['AutoScalingGroups'][0]['Instances'][0]['InstanceId']
instance_to_keep = response['AutoScalingGroups'][0]['Instances'][1]['InstanceId']
ec2_client = boto3.client('ec2', region_name='us-east-1')
response = ec2_client.describe_instances(InstanceIds=[instance_to_detach])
response = client.detach_instances(
AutoScalingGroupName='test_asg',
InstanceIds=[instance_to_detach],
ShouldDecrementDesiredCapacity=False
)
response['ResponseMetadata']['HTTPStatusCode'].should.equal(200)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
# test to ensure instance was replaced
response['AutoScalingGroups'][0]['Instances'].should.have.length_of(2)
response = ec2_client.describe_instances(InstanceIds=[instance_to_detach])
tags = response['Reservations'][0]['Instances'][0]['Tags']
tags.should.have.length_of(1)
response = ec2_client.describe_instances(InstanceIds=[instance_to_keep])
tags = response['Reservations'][0]['Instances'][0]['Tags']
tags.should.have.length_of(2)
@mock_autoscaling
@mock_ec2
def test_attach_one_instance():
client = boto3.client('autoscaling', region_name='us-east-1')
_ = client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=4,
DesiredCapacity=2,
Tags=[
{'ResourceId': 'test_asg',
'ResourceType': 'auto-scaling-group',
'Key': 'propagated-tag-key',
'Value': 'propagate-tag-value',
'PropagateAtLaunch': True
}]
)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
ec2 = boto3.resource('ec2', 'us-east-1')
instances_to_add = [x.id for x in ec2.create_instances(ImageId='', MinCount=1, MaxCount=1)]
response = client.attach_instances(
AutoScalingGroupName='test_asg',
InstanceIds=instances_to_add
)
response['ResponseMetadata']['HTTPStatusCode'].should.equal(200)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
response['AutoScalingGroups'][0]['Instances'].should.have.length_of(3)
@mock_autoscaling
@mock_ec2
def test_describe_instance_health():
client = boto3.client('autoscaling', region_name='us-east-1')
_ = client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=2,
MaxSize=4,
DesiredCapacity=2,
)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
instance1 = response['AutoScalingGroups'][0]['Instances'][0]
instance1['HealthStatus'].should.equal('Healthy')
@mock_autoscaling
@mock_ec2
def test_set_instance_health():
client = boto3.client('autoscaling', region_name='us-east-1')
_ = client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration'
)
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=2,
MaxSize=4,
DesiredCapacity=2,
)
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
instance1 = response['AutoScalingGroups'][0]['Instances'][0]
instance1['HealthStatus'].should.equal('Healthy')
client.set_instance_health(InstanceId=instance1['InstanceId'], HealthStatus='Unhealthy')
response = client.describe_auto_scaling_groups(
AutoScalingGroupNames=['test_asg']
)
instance1 = response['AutoScalingGroups'][0]['Instances'][0]
instance1['HealthStatus'].should.equal('Unhealthy')

View File

@ -0,0 +1,131 @@
from __future__ import unicode_literals
import boto3
from moto import mock_autoscaling, mock_ec2, mock_elbv2
@mock_elbv2
@mock_ec2
@mock_autoscaling
def test_attach_detach_target_groups():
INSTANCE_COUNT = 2
client = boto3.client('autoscaling', region_name='us-east-1')
elbv2_client = boto3.client('elbv2', region_name='us-east-1')
ec2 = boto3.resource('ec2', region_name='us-east-1')
vpc = ec2.create_vpc(CidrBlock='172.28.7.0/24', InstanceTenancy='default')
response = elbv2_client.create_target_group(
Name='a-target',
Protocol='HTTP',
Port=8080,
VpcId=vpc.id,
HealthCheckProtocol='HTTP',
HealthCheckPort='8080',
HealthCheckPath='/',
HealthCheckIntervalSeconds=5,
HealthCheckTimeoutSeconds=5,
HealthyThresholdCount=5,
UnhealthyThresholdCount=2,
Matcher={'HttpCode': '200'})
target_group_arn = response['TargetGroups'][0]['TargetGroupArn']
client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration')
# create asg, attach to target group on create
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=INSTANCE_COUNT,
DesiredCapacity=INSTANCE_COUNT,
TargetGroupARNs=[target_group_arn],
VPCZoneIdentifier=vpc.id)
# create asg without attaching to target group
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg2',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=INSTANCE_COUNT,
DesiredCapacity=INSTANCE_COUNT,
VPCZoneIdentifier=vpc.id)
response = client.describe_load_balancer_target_groups(
AutoScalingGroupName='test_asg')
list(response['LoadBalancerTargetGroups']).should.have.length_of(1)
response = elbv2_client.describe_target_health(
TargetGroupArn=target_group_arn)
list(response['TargetHealthDescriptions']).should.have.length_of(INSTANCE_COUNT)
client.attach_load_balancer_target_groups(
AutoScalingGroupName='test_asg2',
TargetGroupARNs=[target_group_arn])
response = elbv2_client.describe_target_health(
TargetGroupArn=target_group_arn)
list(response['TargetHealthDescriptions']).should.have.length_of(INSTANCE_COUNT * 2)
response = client.detach_load_balancer_target_groups(
AutoScalingGroupName='test_asg2',
TargetGroupARNs=[target_group_arn])
response = elbv2_client.describe_target_health(
TargetGroupArn=target_group_arn)
list(response['TargetHealthDescriptions']).should.have.length_of(INSTANCE_COUNT)
@mock_elbv2
@mock_ec2
@mock_autoscaling
def test_detach_all_target_groups():
INSTANCE_COUNT = 2
client = boto3.client('autoscaling', region_name='us-east-1')
elbv2_client = boto3.client('elbv2', region_name='us-east-1')
ec2 = boto3.resource('ec2', region_name='us-east-1')
vpc = ec2.create_vpc(CidrBlock='172.28.7.0/24', InstanceTenancy='default')
response = elbv2_client.create_target_group(
Name='a-target',
Protocol='HTTP',
Port=8080,
VpcId=vpc.id,
HealthCheckProtocol='HTTP',
HealthCheckPort='8080',
HealthCheckPath='/',
HealthCheckIntervalSeconds=5,
HealthCheckTimeoutSeconds=5,
HealthyThresholdCount=5,
UnhealthyThresholdCount=2,
Matcher={'HttpCode': '200'})
target_group_arn = response['TargetGroups'][0]['TargetGroupArn']
client.create_launch_configuration(
LaunchConfigurationName='test_launch_configuration')
client.create_auto_scaling_group(
AutoScalingGroupName='test_asg',
LaunchConfigurationName='test_launch_configuration',
MinSize=0,
MaxSize=INSTANCE_COUNT,
DesiredCapacity=INSTANCE_COUNT,
TargetGroupARNs=[target_group_arn],
VPCZoneIdentifier=vpc.id)
response = client.describe_load_balancer_target_groups(
AutoScalingGroupName='test_asg')
list(response['LoadBalancerTargetGroups']).should.have.length_of(1)
response = elbv2_client.describe_target_health(
TargetGroupArn=target_group_arn)
list(response['TargetHealthDescriptions']).should.have.length_of(INSTANCE_COUNT)
response = client.detach_load_balancer_target_groups(
AutoScalingGroupName='test_asg',
TargetGroupARNs=[target_group_arn])
response = elbv2_client.describe_target_health(
TargetGroupArn=target_group_arn)
list(response['TargetHealthDescriptions']).should.have.length_of(0)
response = client.describe_load_balancer_target_groups(
AutoScalingGroupName='test_asg')
list(response['LoadBalancerTargetGroups']).should.have.length_of(0)

View File

@ -12,7 +12,7 @@ import sure # noqa
from freezegun import freeze_time
from moto import mock_lambda, mock_s3, mock_ec2, settings
_lambda_region = 'us-east-1' if settings.TEST_SERVER_MODE else 'us-west-2'
_lambda_region = 'us-west-2'
def _process_lambda(func_str):
@ -220,7 +220,7 @@ def test_create_function_from_aws_bucket():
result.pop('LastModified')
result.should.equal({
'FunctionName': 'testFunction',
'FunctionArn': 'arn:aws:lambda:{}:123456789012:function:testFunction'.format(_lambda_region),
'FunctionArn': 'arn:aws:lambda:{}:123456789012:function:testFunction:$LATEST'.format(_lambda_region),
'Runtime': 'python2.7',
'Role': 'test-iam-role',
'Handler': 'lambda_function.lambda_handler',
@ -265,7 +265,7 @@ def test_create_function_from_zipfile():
result.should.equal({
'FunctionName': 'testFunction',
'FunctionArn': 'arn:aws:lambda:{}:123456789012:function:testFunction'.format(_lambda_region),
'FunctionArn': 'arn:aws:lambda:{}:123456789012:function:testFunction:$LATEST'.format(_lambda_region),
'Runtime': 'python2.7',
'Role': 'test-iam-role',
'Handler': 'lambda_function.lambda_handler',
@ -317,30 +317,25 @@ def test_get_function():
result['ResponseMetadata'].pop('RetryAttempts', None)
result['Configuration'].pop('LastModified')
result.should.equal({
"Code": {
"Location": "s3://awslambda-{0}-tasks.s3-{0}.amazonaws.com/test.zip".format(_lambda_region),
"RepositoryType": "S3"
},
"Configuration": {
"CodeSha256": hashlib.sha256(zip_content).hexdigest(),
"CodeSize": len(zip_content),
"Description": "test lambda function",
"FunctionArn": 'arn:aws:lambda:{}:123456789012:function:testFunction'.format(_lambda_region),
"FunctionName": "testFunction",
"Handler": "lambda_function.lambda_handler",
"MemorySize": 128,
"Role": "test-iam-role",
"Runtime": "python2.7",
"Timeout": 3,
"Version": '$LATEST',
"VpcConfig": {
"SecurityGroupIds": [],
"SubnetIds": [],
}
},
'ResponseMetadata': {'HTTPStatusCode': 200},
})
result['Code']['Location'].should.equal('s3://awslambda-{0}-tasks.s3-{0}.amazonaws.com/test.zip'.format(_lambda_region))
result['Code']['RepositoryType'].should.equal('S3')
result['Configuration']['CodeSha256'].should.equal(hashlib.sha256(zip_content).hexdigest())
result['Configuration']['CodeSize'].should.equal(len(zip_content))
result['Configuration']['Description'].should.equal('test lambda function')
result['Configuration'].should.contain('FunctionArn')
result['Configuration']['FunctionName'].should.equal('testFunction')
result['Configuration']['Handler'].should.equal('lambda_function.lambda_handler')
result['Configuration']['MemorySize'].should.equal(128)
result['Configuration']['Role'].should.equal('test-iam-role')
result['Configuration']['Runtime'].should.equal('python2.7')
result['Configuration']['Timeout'].should.equal(3)
result['Configuration']['Version'].should.equal('$LATEST')
result['Configuration'].should.contain('VpcConfig')
# Test get_function with a Qualifier
result = conn.get_function(FunctionName='testFunction', Qualifier='$LATEST')
result['Configuration']['Version'].should.equal('$LATEST')
@mock_lambda
@ -380,6 +375,52 @@ def test_delete_function():
FunctionName='testFunctionThatDoesntExist').should.throw(botocore.client.ClientError)
@mock_lambda
@mock_s3
def test_publish():
s3_conn = boto3.client('s3', 'us-west-2')
s3_conn.create_bucket(Bucket='test-bucket')
zip_content = get_test_zip_file2()
s3_conn.put_object(Bucket='test-bucket', Key='test.zip', Body=zip_content)
conn = boto3.client('lambda', 'us-west-2')
conn.create_function(
FunctionName='testFunction',
Runtime='python2.7',
Role='test-iam-role',
Handler='lambda_function.lambda_handler',
Code={
'S3Bucket': 'test-bucket',
'S3Key': 'test.zip',
},
Description='test lambda function',
Timeout=3,
MemorySize=128,
Publish=True,
)
function_list = conn.list_functions()
function_list['Functions'].should.have.length_of(1)
latest_arn = function_list['Functions'][0]['FunctionArn']
conn.publish_version(FunctionName='testFunction')
function_list = conn.list_functions()
function_list['Functions'].should.have.length_of(2)
# #SetComprehension ;-)
published_arn = list({f['FunctionArn'] for f in function_list['Functions']} - {latest_arn})[0]
published_arn.should.contain('testFunction:1')
conn.delete_function(FunctionName='testFunction', Qualifier='1')
function_list = conn.list_functions()
function_list['Functions'].should.have.length_of(1)
function_list['Functions'][0]['FunctionArn'].should.contain('testFunction:$LATEST')
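The set trick in `test_publish` above (subtract the known `$LATEST` ARN from the full ARN set to isolate whatever `publish_version` added) is easiest to see in isolation; a minimal sketch with hard-coded ARNs:

```python
# The $LATEST ARN that existed before publish_version was called.
latest_arn = 'arn:aws:lambda:us-west-2:123456789012:function:testFunction:$LATEST'

# ARNs as returned by list_functions() after publishing version 1.
all_arns = {
    latest_arn,
    'arn:aws:lambda:us-west-2:123456789012:function:testFunction:1',
}

# Subtracting the known ARN leaves only the newly published one.
published_arn = list(all_arns - {latest_arn})[0]
print(published_arn)
# -> arn:aws:lambda:us-west-2:123456789012:function:testFunction:1
```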
@mock_lambda
@mock_s3
@freeze_time('2015-01-01 00:00:00')
@ -420,7 +461,7 @@ def test_list_create_list_get_delete_list():
"CodeSha256": hashlib.sha256(zip_content).hexdigest(),
"CodeSize": len(zip_content),
"Description": "test lambda function",
"FunctionArn": 'arn:aws:lambda:{}:123456789012:function:testFunction'.format(_lambda_region),
"FunctionArn": 'arn:aws:lambda:{}:123456789012:function:testFunction:$LATEST'.format(_lambda_region),
"FunctionName": "testFunction",
"Handler": "lambda_function.lambda_handler",
"MemorySize": 128,
@ -488,6 +529,7 @@ def lambda_handler(event, context):
assert 'FunctionError' in result
assert result['FunctionError'] == 'Handled'
@mock_lambda
@mock_s3
def test_tags():
@ -554,6 +596,7 @@ def test_tags():
TagKeys=['spam']
)['ResponseMetadata']['HTTPStatusCode'].should.equal(204)
@mock_lambda
def test_tags_not_found():
"""
@ -574,6 +617,7 @@ def test_tags_not_found():
TagKeys=['spam']
).should.throw(botocore.client.ClientError)
@mock_lambda
def test_invoke_async_function():
conn = boto3.client('lambda', 'us-west-2')
@ -581,10 +625,8 @@ def test_invoke_async_function():
FunctionName='testFunction',
Runtime='python2.7',
Role='test-iam-role',
Handler='lambda_function.handler',
Code={
'ZipFile': get_test_zip_file1(),
},
Handler='lambda_function.lambda_handler',
Code={'ZipFile': get_test_zip_file1()},
Description='test lambda function',
Timeout=3,
MemorySize=128,
@ -593,11 +635,12 @@ def test_invoke_async_function():
success_result = conn.invoke_async(
FunctionName='testFunction',
InvokeArgs=json.dumps({ 'test': 'event' })
InvokeArgs=json.dumps({'test': 'event'})
)
success_result['Status'].should.equal(202)
@mock_lambda
@freeze_time('2015-01-01 00:00:00')
def test_get_function_created_with_zipfile():
@ -631,7 +674,7 @@ def test_get_function_created_with_zipfile():
"CodeSha256": hashlib.sha256(zip_content).hexdigest(),
"CodeSize": len(zip_content),
"Description": "test lambda function",
"FunctionArn":'arn:aws:lambda:{}:123456789012:function:testFunction'.format(_lambda_region),
"FunctionArn":'arn:aws:lambda:{}:123456789012:function:testFunction:$LATEST'.format(_lambda_region),
"FunctionName": "testFunction",
"Handler": "lambda_function.handler",
"MemorySize": 128,
@ -646,6 +689,7 @@ def test_get_function_created_with_zipfile():
},
)
@mock_lambda
def test_add_function_permission():
conn = boto3.client('lambda', 'us-west-2')

View File

@ -0,0 +1,809 @@
from __future__ import unicode_literals
import time
import datetime
import boto3
from botocore.exceptions import ClientError
import sure # noqa
from moto import mock_batch, mock_iam, mock_ec2, mock_ecs, mock_logs
import functools
import nose
def expected_failure(test):
@functools.wraps(test)
def inner(*args, **kwargs):
try:
test(*args, **kwargs)
except Exception as err:
raise nose.SkipTest(str(err))
return inner
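The `expected_failure` decorator defined above converts any exception into a skip, so tests for known-unimplemented features don't fail the suite. A self-contained sketch of the same pattern, using the stdlib `unittest.SkipTest` in place of `nose.SkipTest` purely so it runs without nose installed:

```python
import functools
import unittest


def expected_failure(test):
    # Wrap a test so any exception marks it skipped rather than failed.
    @functools.wraps(test)
    def inner(*args, **kwargs):
        try:
            test(*args, **kwargs)
        except Exception as err:
            raise unittest.SkipTest(str(err))
    return inner


@expected_failure
def test_not_yet_implemented():
    raise NotImplementedError('feature pending')
```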
DEFAULT_REGION = 'eu-central-1'
def _get_clients():
return boto3.client('ec2', region_name=DEFAULT_REGION), \
boto3.client('iam', region_name=DEFAULT_REGION), \
boto3.client('ecs', region_name=DEFAULT_REGION), \
boto3.client('logs', region_name=DEFAULT_REGION), \
boto3.client('batch', region_name=DEFAULT_REGION)
def _setup(ec2_client, iam_client):
"""
Do prerequisite setup
:return: VPC ID, Subnet ID, Security group ID, IAM Role ARN
:rtype: tuple
"""
resp = ec2_client.create_vpc(CidrBlock='172.30.0.0/24')
vpc_id = resp['Vpc']['VpcId']
resp = ec2_client.create_subnet(
AvailabilityZone='eu-central-1a',
CidrBlock='172.30.0.0/25',
VpcId=vpc_id
)
subnet_id = resp['Subnet']['SubnetId']
resp = ec2_client.create_security_group(
Description='test_sg_desc',
GroupName='test_sg',
VpcId=vpc_id
)
sg_id = resp['GroupId']
resp = iam_client.create_role(
RoleName='TestRole',
AssumeRolePolicyDocument='some_policy'
)
iam_arn = resp['Role']['Arn']
return vpc_id, subnet_id, sg_id, iam_arn
# Yes, yes it talks to all the things
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_create_managed_compute_environment():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='MANAGED',
state='ENABLED',
computeResources={
'type': 'EC2',
'minvCpus': 5,
'maxvCpus': 10,
'desiredvCpus': 5,
'instanceTypes': [
't2.small',
't2.medium'
],
'imageId': 'some_image_id',
'subnets': [
subnet_id,
],
'securityGroupIds': [
sg_id,
],
'ec2KeyPair': 'string',
'instanceRole': iam_arn,
'tags': {
'string': 'string'
},
'bidPercentage': 123,
'spotIamFleetRole': 'string'
},
serviceRole=iam_arn
)
resp.should.contain('computeEnvironmentArn')
resp['computeEnvironmentName'].should.equal(compute_name)
# A t2.medium is 2 vCPUs and a t2.small is 1, so desiredvCpus=5 should create 2 mediums and 1 small
resp = ec2_client.describe_instances()
resp.should.contain('Reservations')
len(resp['Reservations']).should.equal(3)
# Should have created 1 ECS cluster
resp = ecs_client.list_clusters()
resp.should.contain('clusterArns')
len(resp['clusterArns']).should.equal(1)
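The instance-count arithmetic asserted above (desiredvCpus=5, t2.medium=2 vCPUs, t2.small=1, hence three instances) amounts to a greedy largest-first packing. A hypothetical sketch of that packing, for illustration only, not moto's actual selection code:

```python
def pick_instances(desired_vcpus, instance_types):
    # instance_types: iterable of (name, vcpus) pairs.
    # Greedily take the largest type while it still fits, then
    # fall through to smaller types to cover the remainder.
    chosen = []
    remaining = desired_vcpus
    for name, vcpus in sorted(instance_types, key=lambda t: -t[1]):
        while remaining >= vcpus:
            chosen.append(name)
            remaining -= vcpus
    return chosen


# desiredvCpus=5 with t2.medium (2 vCPUs) and t2.small (1 vCPU):
print(pick_instances(5, [('t2.small', 1), ('t2.medium', 2)]))
# -> ['t2.medium', 't2.medium', 't2.small']
```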
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_create_unmanaged_compute_environment():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
resp.should.contain('computeEnvironmentArn')
resp['computeEnvironmentName'].should.equal(compute_name)
# It's unmanaged, so no instances should be created
resp = ec2_client.describe_instances()
resp.should.contain('Reservations')
len(resp['Reservations']).should.equal(0)
# Should have created 1 ECS cluster
resp = ecs_client.list_clusters()
resp.should.contain('clusterArns')
len(resp['clusterArns']).should.equal(1)
# TODO create 1000s of tests to test complex option combinations of create environment
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_describe_compute_environment():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
resp = batch_client.describe_compute_environments()
len(resp['computeEnvironments']).should.equal(1)
resp['computeEnvironments'][0]['computeEnvironmentName'].should.equal(compute_name)
# Test filtering
resp = batch_client.describe_compute_environments(
computeEnvironments=['test1']
)
len(resp['computeEnvironments']).should.equal(0)
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_delete_unmanaged_compute_environment():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
batch_client.delete_compute_environment(
computeEnvironment=compute_name,
)
resp = batch_client.describe_compute_environments()
len(resp['computeEnvironments']).should.equal(0)
resp = ecs_client.list_clusters()
len(resp.get('clusterArns', [])).should.equal(0)
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_delete_managed_compute_environment():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='MANAGED',
state='ENABLED',
computeResources={
'type': 'EC2',
'minvCpus': 5,
'maxvCpus': 10,
'desiredvCpus': 5,
'instanceTypes': [
't2.small',
't2.medium'
],
'imageId': 'some_image_id',
'subnets': [
subnet_id,
],
'securityGroupIds': [
sg_id,
],
'ec2KeyPair': 'string',
'instanceRole': iam_arn,
'tags': {
'string': 'string'
},
'bidPercentage': 123,
'spotIamFleetRole': 'string'
},
serviceRole=iam_arn
)
batch_client.delete_compute_environment(
computeEnvironment=compute_name,
)
resp = batch_client.describe_compute_environments()
len(resp['computeEnvironments']).should.equal(0)
resp = ec2_client.describe_instances()
resp.should.contain('Reservations')
len(resp['Reservations']).should.equal(3)
for reservation in resp['Reservations']:
reservation['Instances'][0]['State']['Name'].should.equal('terminated')
resp = ecs_client.list_clusters()
len(resp.get('clusterArns', [])).should.equal(0)
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_update_unmanaged_compute_environment_state():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
batch_client.update_compute_environment(
computeEnvironment=compute_name,
state='DISABLED'
)
resp = batch_client.describe_compute_environments()
len(resp['computeEnvironments']).should.equal(1)
resp['computeEnvironments'][0]['state'].should.equal('DISABLED')
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_create_job_queue():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
arn = resp['computeEnvironmentArn']
resp = batch_client.create_job_queue(
jobQueueName='test_job_queue',
state='ENABLED',
priority=123,
computeEnvironmentOrder=[
{
'order': 123,
'computeEnvironment': arn
},
]
)
resp.should.contain('jobQueueArn')
resp.should.contain('jobQueueName')
queue_arn = resp['jobQueueArn']
resp = batch_client.describe_job_queues()
resp.should.contain('jobQueues')
len(resp['jobQueues']).should.equal(1)
resp['jobQueues'][0]['jobQueueArn'].should.equal(queue_arn)
resp = batch_client.describe_job_queues(jobQueues=['test_invalid_queue'])
resp.should.contain('jobQueues')
len(resp['jobQueues']).should.equal(0)
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_job_queue_bad_arn():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
arn = resp['computeEnvironmentArn']
try:
batch_client.create_job_queue(
jobQueueName='test_job_queue',
state='ENABLED',
priority=123,
computeEnvironmentOrder=[
{
'order': 123,
'computeEnvironment': arn + 'LALALA'
},
]
)
except ClientError as err:
err.response['Error']['Code'].should.equal('ClientException')
else:
raise RuntimeError('Should have raised ClientError for the invalid compute environment ARN')
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_update_job_queue():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
arn = resp['computeEnvironmentArn']
resp = batch_client.create_job_queue(
jobQueueName='test_job_queue',
state='ENABLED',
priority=123,
computeEnvironmentOrder=[
{
'order': 123,
'computeEnvironment': arn
},
]
)
queue_arn = resp['jobQueueArn']
batch_client.update_job_queue(
jobQueue=queue_arn,
priority=5
)
resp = batch_client.describe_job_queues()
resp.should.contain('jobQueues')
len(resp['jobQueues']).should.equal(1)
resp['jobQueues'][0]['priority'].should.equal(5)
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_delete_job_queue():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
arn = resp['computeEnvironmentArn']
resp = batch_client.create_job_queue(
jobQueueName='test_job_queue',
state='ENABLED',
priority=123,
computeEnvironmentOrder=[
{
'order': 123,
'computeEnvironment': arn
},
]
)
queue_arn = resp['jobQueueArn']
batch_client.delete_job_queue(
jobQueue=queue_arn
)
resp = batch_client.describe_job_queues()
resp.should.contain('jobQueues')
len(resp['jobQueues']).should.equal(0)
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_register_task_definition():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
resp = batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 128,
'command': ['sleep', '10']
}
)
resp.should.contain('jobDefinitionArn')
resp.should.contain('jobDefinitionName')
resp.should.contain('revision')
assert resp['jobDefinitionArn'].endswith('{0}:{1}'.format(resp['jobDefinitionName'], resp['revision']))
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_reregister_task_definition():
# Reregistering task with the same name bumps the revision number
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
resp1 = batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 128,
'command': ['sleep', '10']
}
)
resp1.should.contain('jobDefinitionArn')
resp1.should.contain('jobDefinitionName')
resp1.should.contain('revision')
assert resp1['jobDefinitionArn'].endswith('{0}:{1}'.format(resp1['jobDefinitionName'], resp1['revision']))
resp1['revision'].should.equal(1)
resp2 = batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 68,
'command': ['sleep', '10']
}
)
resp2['revision'].should.equal(2)
resp2['jobDefinitionArn'].should_not.equal(resp1['jobDefinitionArn'])
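The revision-bump behaviour this test exercises can be modelled with a small registry. This is only a sketch of the bookkeeping a mocked backend might do, not moto's actual implementation; the region and account ID in the ARN are illustrative:

```python
from collections import defaultdict


class JobDefinitionRegistry:
    """Minimal model: registering the same name again creates a new revision."""

    def __init__(self):
        self._revisions = defaultdict(int)

    def register(self, name):
        # Each registration under an existing name increments its revision counter.
        self._revisions[name] += 1
        revision = self._revisions[name]
        arn = 'arn:aws:batch:eu-central-1:123456789012:job-definition/{0}:{1}'.format(name, revision)
        return {'jobDefinitionName': name, 'revision': revision, 'jobDefinitionArn': arn}
```

Under this model, two registrations of `sleep10` yield revisions 1 and 2 with distinct ARNs, which is exactly what the assertions above check.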
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_delete_task_definition():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
resp = batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 128,
'command': ['sleep', '10']
}
)
batch_client.deregister_job_definition(jobDefinition=resp['jobDefinitionArn'])
resp = batch_client.describe_job_definitions()
len(resp['jobDefinitions']).should.equal(0)
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_describe_task_definition():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 128,
'command': ['sleep', '10']
}
)
batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 64,
'command': ['sleep', '10']
}
)
batch_client.register_job_definition(
jobDefinitionName='test1',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 64,
'command': ['sleep', '10']
}
)
resp = batch_client.describe_job_definitions(
jobDefinitionName='sleep10'
)
len(resp['jobDefinitions']).should.equal(2)
resp = batch_client.describe_job_definitions()
len(resp['jobDefinitions']).should.equal(3)
resp = batch_client.describe_job_definitions(
jobDefinitions=['sleep10', 'test1']
)
len(resp['jobDefinitions']).should.equal(3)
# SLOW TESTS
@expected_failure
@mock_logs
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_submit_job():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
arn = resp['computeEnvironmentArn']
resp = batch_client.create_job_queue(
jobQueueName='test_job_queue',
state='ENABLED',
priority=123,
computeEnvironmentOrder=[
{
'order': 123,
'computeEnvironment': arn
},
]
)
queue_arn = resp['jobQueueArn']
resp = batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 128,
'command': ['sleep', '10']
}
)
job_def_arn = resp['jobDefinitionArn']
resp = batch_client.submit_job(
jobName='test1',
jobQueue=queue_arn,
jobDefinition=job_def_arn
)
job_id = resp['jobId']
future = datetime.datetime.now() + datetime.timedelta(seconds=30)
while datetime.datetime.now() < future:
resp = batch_client.describe_jobs(jobs=[job_id])
print("{0}:{1} {2}".format(resp['jobs'][0]['jobName'], resp['jobs'][0]['jobId'], resp['jobs'][0]['status']))
if resp['jobs'][0]['status'] == 'FAILED':
raise RuntimeError('Batch job failed')
if resp['jobs'][0]['status'] == 'SUCCEEDED':
break
time.sleep(0.5)
else:
raise RuntimeError('Batch job timed out')
resp = logs_client.describe_log_streams(logGroupName='/aws/batch/job')
len(resp['logStreams']).should.equal(1)
ls_name = resp['logStreams'][0]['logStreamName']
resp = logs_client.get_log_events(logGroupName='/aws/batch/job', logStreamName=ls_name)
len(resp['events']).should.be.greater_than(5)
@expected_failure
@mock_logs
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_list_jobs():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
arn = resp['computeEnvironmentArn']
resp = batch_client.create_job_queue(
jobQueueName='test_job_queue',
state='ENABLED',
priority=123,
computeEnvironmentOrder=[
{
'order': 123,
'computeEnvironment': arn
},
]
)
queue_arn = resp['jobQueueArn']
resp = batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 128,
'command': ['sleep', '10']
}
)
job_def_arn = resp['jobDefinitionArn']
resp = batch_client.submit_job(
jobName='test1',
jobQueue=queue_arn,
jobDefinition=job_def_arn
)
job_id1 = resp['jobId']
resp = batch_client.submit_job(
jobName='test2',
jobQueue=queue_arn,
jobDefinition=job_def_arn
)
job_id2 = resp['jobId']
future = datetime.datetime.now() + datetime.timedelta(seconds=30)
resp_finished_jobs = batch_client.list_jobs(
jobQueue=queue_arn,
jobStatus='SUCCEEDED'
)
# Wait only as long as it takes to run the jobs
while datetime.datetime.now() < future:
resp = batch_client.describe_jobs(jobs=[job_id1, job_id2])
any_failed_jobs = any([job['status'] == 'FAILED' for job in resp['jobs']])
succeeded_jobs = all([job['status'] == 'SUCCEEDED' for job in resp['jobs']])
if any_failed_jobs:
raise RuntimeError('A Batch job failed')
if succeeded_jobs:
break
time.sleep(0.5)
else:
raise RuntimeError('Batch jobs timed out')
resp_finished_jobs2 = batch_client.list_jobs(
jobQueue=queue_arn,
jobStatus='SUCCEEDED'
)
len(resp_finished_jobs['jobSummaryList']).should.equal(0)
len(resp_finished_jobs2['jobSummaryList']).should.equal(2)
@expected_failure
@mock_logs
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_terminate_job():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
compute_name = 'test_compute_env'
resp = batch_client.create_compute_environment(
computeEnvironmentName=compute_name,
type='UNMANAGED',
state='ENABLED',
serviceRole=iam_arn
)
arn = resp['computeEnvironmentArn']
resp = batch_client.create_job_queue(
jobQueueName='test_job_queue',
state='ENABLED',
priority=123,
computeEnvironmentOrder=[
{
'order': 123,
'computeEnvironment': arn
},
]
)
queue_arn = resp['jobQueueArn']
resp = batch_client.register_job_definition(
jobDefinitionName='sleep10',
type='container',
containerProperties={
'image': 'busybox',
'vcpus': 1,
'memory': 128,
'command': ['sleep', '10']
}
)
job_def_arn = resp['jobDefinitionArn']
resp = batch_client.submit_job(
jobName='test1',
jobQueue=queue_arn,
jobDefinition=job_def_arn
)
job_id = resp['jobId']
time.sleep(2)
batch_client.terminate_job(jobId=job_id, reason='test_terminate')
time.sleep(1)
resp = batch_client.describe_jobs(jobs=[job_id])
resp['jobs'][0]['jobName'].should.equal('test1')
resp['jobs'][0]['status'].should.equal('FAILED')
resp['jobs'][0]['statusReason'].should.equal('test_terminate')
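The three slow tests above each hand-roll the same poll-until-status loop with a deadline. A generic helper could consolidate that pattern; this is a sketch (`wait_for` is not part of the test module):

```python
import time


def wait_for(predicate, timeout=30.0, interval=0.5):
    """Poll `predicate` until it returns a truthy value or `timeout` seconds pass.

    Returns the truthy value; raises RuntimeError on timeout.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise RuntimeError('timed out waiting for condition')
```

With this, each test's loop collapses to a single call whose predicate returns the job(s) once they reach SUCCEEDED and raises on FAILED.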


@ -0,0 +1,247 @@
from __future__ import unicode_literals
import time
import datetime
import boto3
from botocore.exceptions import ClientError
import sure # noqa
from moto import mock_batch, mock_iam, mock_ec2, mock_ecs, mock_logs, mock_cloudformation
import functools
import nose
import json
DEFAULT_REGION = 'eu-central-1'
def _get_clients():
return boto3.client('ec2', region_name=DEFAULT_REGION), \
boto3.client('iam', region_name=DEFAULT_REGION), \
boto3.client('ecs', region_name=DEFAULT_REGION), \
boto3.client('logs', region_name=DEFAULT_REGION), \
boto3.client('batch', region_name=DEFAULT_REGION)
def _setup(ec2_client, iam_client):
"""
Do prerequisite setup
:return: VPC ID, Subnet ID, Security group ID, IAM Role ARN
:rtype: tuple
"""
resp = ec2_client.create_vpc(CidrBlock='172.30.0.0/24')
vpc_id = resp['Vpc']['VpcId']
resp = ec2_client.create_subnet(
AvailabilityZone='eu-central-1a',
CidrBlock='172.30.0.0/25',
VpcId=vpc_id
)
subnet_id = resp['Subnet']['SubnetId']
resp = ec2_client.create_security_group(
Description='test_sg_desc',
GroupName='test_sg',
VpcId=vpc_id
)
sg_id = resp['GroupId']
resp = iam_client.create_role(
RoleName='TestRole',
AssumeRolePolicyDocument='some_policy'
)
iam_arn = resp['Role']['Arn']
return vpc_id, subnet_id, sg_id, iam_arn
@mock_cloudformation()
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_create_env_cf():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
create_environment_template = {
'Resources': {
"ComputeEnvironment": {
"Type": "AWS::Batch::ComputeEnvironment",
"Properties": {
"Type": "MANAGED",
"ComputeResources": {
"Type": "EC2",
"MinvCpus": 0,
"DesiredvCpus": 0,
"MaxvCpus": 64,
"InstanceTypes": [
"optimal"
],
"Subnets": [subnet_id],
"SecurityGroupIds": [sg_id],
"InstanceRole": iam_arn
},
"ServiceRole": iam_arn
}
}
}
}
cf_json = json.dumps(create_environment_template)
cf_conn = boto3.client('cloudformation', DEFAULT_REGION)
stack_id = cf_conn.create_stack(
StackName='test_stack',
TemplateBody=cf_json,
)['StackId']
stack_resources = cf_conn.list_stack_resources(StackName=stack_id)
stack_resources['StackResourceSummaries'][0]['ResourceStatus'].should.equal('CREATE_COMPLETE')
# Spot checks on the ARN
    assert stack_resources['StackResourceSummaries'][0]['PhysicalResourceId'].startswith('arn:aws:batch:')
stack_resources['StackResourceSummaries'][0]['PhysicalResourceId'].should.contain('test_stack')
@mock_cloudformation()
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_create_job_queue_cf():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
create_environment_template = {
'Resources': {
"ComputeEnvironment": {
"Type": "AWS::Batch::ComputeEnvironment",
"Properties": {
"Type": "MANAGED",
"ComputeResources": {
"Type": "EC2",
"MinvCpus": 0,
"DesiredvCpus": 0,
"MaxvCpus": 64,
"InstanceTypes": [
"optimal"
],
"Subnets": [subnet_id],
"SecurityGroupIds": [sg_id],
"InstanceRole": iam_arn
},
"ServiceRole": iam_arn
}
},
"JobQueue": {
"Type": "AWS::Batch::JobQueue",
"Properties": {
"Priority": 1,
"ComputeEnvironmentOrder": [
{
"Order": 1,
"ComputeEnvironment": {"Ref": "ComputeEnvironment"}
}
]
}
},
}
}
cf_json = json.dumps(create_environment_template)
cf_conn = boto3.client('cloudformation', DEFAULT_REGION)
stack_id = cf_conn.create_stack(
StackName='test_stack',
TemplateBody=cf_json,
)['StackId']
stack_resources = cf_conn.list_stack_resources(StackName=stack_id)
len(stack_resources['StackResourceSummaries']).should.equal(2)
job_queue_resource = list(filter(lambda item: item['ResourceType'] == 'AWS::Batch::JobQueue', stack_resources['StackResourceSummaries']))[0]
job_queue_resource['ResourceStatus'].should.equal('CREATE_COMPLETE')
# Spot checks on the ARN
    assert job_queue_resource['PhysicalResourceId'].startswith('arn:aws:batch:')
job_queue_resource['PhysicalResourceId'].should.contain('test_stack')
job_queue_resource['PhysicalResourceId'].should.contain('job-queue/')
@mock_cloudformation()
@mock_ec2
@mock_ecs
@mock_iam
@mock_batch
def test_create_job_def_cf():
ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
create_environment_template = {
'Resources': {
"ComputeEnvironment": {
"Type": "AWS::Batch::ComputeEnvironment",
"Properties": {
"Type": "MANAGED",
"ComputeResources": {
"Type": "EC2",
"MinvCpus": 0,
"DesiredvCpus": 0,
"MaxvCpus": 64,
"InstanceTypes": [
"optimal"
],
"Subnets": [subnet_id],
"SecurityGroupIds": [sg_id],
"InstanceRole": iam_arn
},
"ServiceRole": iam_arn
}
},
"JobQueue": {
"Type": "AWS::Batch::JobQueue",
"Properties": {
"Priority": 1,
"ComputeEnvironmentOrder": [
{
"Order": 1,
"ComputeEnvironment": {"Ref": "ComputeEnvironment"}
}
]
}
},
"JobDefinition": {
"Type": "AWS::Batch::JobDefinition",
"Properties": {
"Type": "container",
"ContainerProperties": {
"Image": {
"Fn::Join": ["", ["137112412989.dkr.ecr.", {"Ref": "AWS::Region"}, ".amazonaws.com/amazonlinux:latest"]]
},
"Vcpus": 2,
"Memory": 2000,
"Command": ["echo", "Hello world"]
},
"RetryStrategy": {
"Attempts": 1
}
}
},
}
}
cf_json = json.dumps(create_environment_template)
cf_conn = boto3.client('cloudformation', DEFAULT_REGION)
stack_id = cf_conn.create_stack(
StackName='test_stack',
TemplateBody=cf_json,
)['StackId']
stack_resources = cf_conn.list_stack_resources(StackName=stack_id)
len(stack_resources['StackResourceSummaries']).should.equal(3)
job_def_resource = list(filter(lambda item: item['ResourceType'] == 'AWS::Batch::JobDefinition', stack_resources['StackResourceSummaries']))[0]
job_def_resource['ResourceStatus'].should.equal('CREATE_COMPLETE')
# Spot checks on the ARN
    assert job_def_resource['PhysicalResourceId'].startswith('arn:aws:batch:')
job_def_resource['PhysicalResourceId'].should.contain('test_stack-JobDef')
job_def_resource['PhysicalResourceId'].should.contain('job-definition/')
