Merge remote-tracking branch 'spulec/master'

Alexander Mohr 2018-05-02 22:38:45 -07:00
commit 22b0298ff9
155 changed files with 7429 additions and 2081 deletions

.bumpversion.cfg (new file)

@@ -0,0 +1,7 @@
+[bumpversion]
+current_version = 1.3.3
+[bumpversion:file:setup.py]
+[bumpversion:file:moto/__init__.py]

.gitignore

@@ -13,3 +13,5 @@ build/
 .DS_Store
 python_env
 .ropeproject/
+.pytest_cache/

@@ -49,3 +49,7 @@ Moto is written by Steve Pulec with contributions from:
 * [Michael van Tellingen](https://github.com/mvantellingen)
 * [Jessie Nadler](https://github.com/nadlerjessie)
 * [Alex Morken](https://github.com/alexmorken)
+* [Clive Li](https://github.com/cliveli)
+* [Jim Shields](https://github.com/jimjshields)
+* [William Richard](https://github.com/william-richard)
+* [Alex Casalboni](https://github.com/alexcasalboni)

@@ -1,8 +1,58 @@
 Moto Changelog
 ===================
-Latest
+1.3.3
 ------
+* Fix a regression in the S3 URL regexes
+* APIGateway region fixes
+* ECS improvements
+* Add @mock_cognitoidentity, thanks to @brcoding
+
+1.3.2
+------
+The huge change in this version is that the responses library is no longer vendored. Many developers are now unblocked. Kudos to @spulec for the fix.
+* Fix route53 TTL bug
+* Added filtering support for S3 lifecycle
+* Unvendored the responses library
+
+1.3.0
+------
+Dozens of major endpoint additions in this release. Highlights include:
+* Fixed AMI tests and the Travis build setup
+* SNS improvements
+* DynamoDB improvements
+* EBS improvements
+* Redshift improvements
+* RDS snapshot improvements
+* S3 improvements
+* CloudWatch improvements
+* SSM improvements
+* IAM improvements
+* ELBv1 and ELBv2 improvements
+* Lambda improvements
+* EC2 spot-pricing improvements
+* APIGateway improvements
+* VPC improvements
+
+1.2.0
+------
+* Supports filtering AMIs by self
+* Implemented signal_workflow_execution for SWF
+* Wired the SWF backend to the moto server
+* Added URL decoding to the x-amz-copy-source header for copying S3 files
+* Revamped Lambda function storage to do versioning
+* IoT improvements
+* RDS improvements
+* Implemented CloudWatch get_metric_statistics
+* Improved CloudFormation EC2 support
+* Implemented CloudFormation change_set endpoints
+
 1.1.25
 -----

@@ -1,4 +1,25 @@
 ### Contributing code
-If you have improvements to Moto, send us your pull requests! For those
-just getting started, Github has a [howto](https://help.github.com/articles/using-pull-requests/).
+Moto has a [Code of Conduct](https://github.com/spulec/moto/blob/master/CODE_OF_CONDUCT.md); you can expect to be treated with respect at all times when interacting with this project.
+
+## Is there a missing feature?
+
+Moto is easier to contribute to than you probably think. There's [a list of which endpoints have been implemented](https://github.com/spulec/moto/blob/master/IMPLEMENTATION_COVERAGE.md) and we invite you to add new endpoints to existing services or to add new services.
+
+How to teach Moto to support a new AWS endpoint:
+
+* Create an issue describing what's missing. This is where we'll all talk about the new addition and help you get it done.
+* Create a [pull request](https://help.github.com/articles/using-pull-requests/) and mention the issue # in the PR description.
+* Try to add a failing test case. For example, if you're trying to implement `boto3.client('acm').import_certificate()` you'll want to add a new method called `def test_import_certificate` to `tests/test_acm/test_acm.py` (a sketch follows below).
+* If you can also implement the code that gets that test passing, that's great. If not, just ask the community for a hand and somebody will assist you.
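A hedged sketch of what that first failing test might look like (the PEM strings below are placeholders; a real test would generate a valid certificate and key, e.g. with the cryptography package):

    import boto3
    from moto import mock_acm


    @mock_acm
    def test_import_certificate():
        client = boto3.client('acm', region_name='us-east-1')
        # Placeholder PEM material -- substitute a real self-signed pair.
        resp = client.import_certificate(
            Certificate=b'-----BEGIN CERTIFICATE-----\n...',
            PrivateKey=b'-----BEGIN PRIVATE KEY-----\n...',
        )
        assert resp['CertificateArn'].startswith('arn:aws:acm:')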
+
+# Maintainers
+
+## Releasing a new version of Moto
+
+You'll need a PyPI account and a Docker Hub account to release Moto. After we release a new PyPI package we build and push the [motoserver/moto](https://hub.docker.com/r/motoserver/moto/) Docker image.
+
+* First, `scripts/bump_version` modifies the version and opens a PR
+* Then, merge the new pull request
+* Finally, generate and ship the new artifacts with `make publish`

File diff suppressed because it is too large

@@ -36,14 +36,13 @@ tag_github_release:
	git tag `python setup.py --version`
	git push origin `python setup.py --version`
-publish: implementation_coverage \
-	upload_pypi_artifact \
+publish: upload_pypi_artifact \
	tag_github_release \
	push_dockerhub_image
 implementation_coverage:
	./scripts/implementation_coverage.py > IMPLEMENTATION_COVERAGE.md
-	git commit IMPLEMENTATION_COVERAGE.md -m "Updating implementation coverage"
+	git commit IMPLEMENTATION_COVERAGE.md -m "Updating implementation coverage" || true
 scaffold:
	@pip install -r requirements-dev.txt > /dev/null

@@ -70,6 +70,8 @@ It gets even better! Moto isn't just for Python code and it isn't just for S3.
 |------------------------------------------------------------------------------|
 | CloudwatchEvents | @mock_events | all endpoints done |
 |------------------------------------------------------------------------------|
+| Cognito Identity | @mock_cognitoidentity | basic endpoints done |
+|------------------------------------------------------------------------------|
 | Data Pipeline | @mock_datapipeline | basic endpoints done |
 |------------------------------------------------------------------------------|
 | DynamoDB | @mock_dynamodb | core endpoints done |

@@ -20,7 +20,7 @@ If you want to install ``moto`` from source::
 Moto usage
 ----------
-For example we have the following code we want to test:
+For example, we have the following code we want to test:
 .. sourcecode:: python
@@ -39,12 +39,12 @@ For example we have the following code we want to test:
         k.key = self.name
         k.set_contents_from_string(self.value)
-There are several method to do this, just keep in mind Moto creates a full blank environment.
+There are several ways to do this, but you should keep in mind that Moto creates a full, blank environment.
 Decorator
 ~~~~~~~~~
-With a decorator wrapping all the calls to S3 are automatically mocked out.
+With a decorator wrapping the test, all the calls to S3 are automatically mocked out.
 .. sourcecode:: python
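A minimal sketch of the decorator style against boto3 (the MyModel class here is a stand-in for the example model above):

    import boto3
    from moto import mock_s3


    class MyModel(object):
        # Stand-in for the example model above
        def __init__(self, name, value):
            self.name = name
            self.value = value

        def save(self):
            s3 = boto3.client('s3', region_name='us-east-1')
            s3.put_object(Bucket='mybucket', Key=self.name, Body=self.value)


    @mock_s3
    def test_my_model_save():
        conn = boto3.resource('s3', region_name='us-east-1')
        # Moto starts from a blank slate, so the bucket must be created first.
        conn.create_bucket(Bucket='mybucket')
        MyModel('steve', 'is awesome').save()
        body = conn.Object('mybucket', 'steve').get()['Body'].read().decode('utf-8')
        assert body == 'is awesome'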
@@ -66,7 +66,7 @@ With a decorator wrapping all the calls to S3 are automatically mocked out.
 Context manager
 ~~~~~~~~~~~~~~~
-Same as decorator, every call inside ``with`` statement are mocked out.
+Same as the decorator, every call inside the ``with`` statement is mocked out.
 .. sourcecode:: python
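The same sketch in context-manager form, reusing the imports and MyModel from the decorator example:

    def test_my_model_save():
        with mock_s3():
            conn = boto3.resource('s3', region_name='us-east-1')
            conn.create_bucket(Bucket='mybucket')
            MyModel('steve', 'is awesome').save()
            body = conn.Object('mybucket', 'steve').get()['Body'].read().decode('utf-8')
            assert body == 'is awesome'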
@@ -83,7 +83,7 @@ Same as decorator, every call inside ``with`` statement are mocked out.
 Raw
 ~~~
-You can also start and stop manually the mocking.
+You can also start and stop the mocking manually.
 .. sourcecode:: python
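And the raw form, again reusing the sketch above; start() and stop() bracket the mocked region explicitly:

    def test_my_model_save():
        mock = mock_s3()
        mock.start()
        try:
            conn = boto3.resource('s3', region_name='us-east-1')
            conn.create_bucket(Bucket='mybucket')
            MyModel('steve', 'is awesome').save()
        finally:
            mock.stop()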
@@ -104,11 +104,11 @@ You can also start and stop manually the mocking.
 Stand-alone server mode
 ~~~~~~~~~~~~~~~~~~~~~~~
-Moto comes with a stand-alone server allowing you to mock out an AWS HTTP endpoint. It is very useful to test even if you don't use Python.
+Moto also comes with a stand-alone server allowing you to mock out an AWS HTTP endpoint. For testing purposes, it's extremely useful even if you don't use Python.
 .. sourcecode:: bash
     $ moto_server ec2 -p3000
      * Running on http://127.0.0.1:3000/
-This method isn't encouraged if you're using ``boto``, best is to use decorator method.
+However, this method isn't encouraged if you're using ``boto``; the best solution would be to use the decorator method.
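With the server from the snippet above running, any SDK that can override its endpoint URL can talk to it; a hedged boto3 example:

    import boto3

    # Point the client at the stand-alone server instead of real AWS.
    ec2 = boto3.client(
        'ec2',
        region_name='us-east-1',
        endpoint_url='http://127.0.0.1:3000',
        aws_access_key_id='fake',
        aws_secret_access_key='fake',
    )
    print(ec2.describe_instances())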

@@ -3,7 +3,7 @@ import logging
 # logging.getLogger('boto').setLevel(logging.CRITICAL)
 __title__ = 'moto'
-__version__ = '1.0.1'
+__version__ = '1.3.3'
 from .acm import mock_acm  # flake8: noqa
 from .apigateway import mock_apigateway, mock_apigateway_deprecated  # flake8: noqa
@@ -11,6 +11,7 @@ from .autoscaling import mock_autoscaling, mock_autoscaling_deprecated  # flake8
 from .awslambda import mock_lambda, mock_lambda_deprecated  # flake8: noqa
 from .cloudformation import mock_cloudformation, mock_cloudformation_deprecated  # flake8: noqa
 from .cloudwatch import mock_cloudwatch, mock_cloudwatch_deprecated  # flake8: noqa
+from .cognitoidentity import mock_cognitoidentity, mock_cognitoidentity_deprecated  # flake8: noqa
 from .datapipeline import mock_datapipeline, mock_datapipeline_deprecated  # flake8: noqa
 from .dynamodb import mock_dynamodb, mock_dynamodb_deprecated  # flake8: noqa
 from .dynamodb2 import mock_dynamodb2, mock_dynamodb2_deprecated  # flake8: noqa

@@ -1,12 +1,14 @@
 from __future__ import absolute_import
 from __future__ import unicode_literals
-import datetime
+import random
+import string
 import requests
+import time
-from moto.packages.responses import responses
+from boto3.session import Session
+import responses
 from moto.core import BaseBackend, BaseModel
-from moto.core.utils import iso_8601_datetime_with_milliseconds
 from .utils import create_id
 from .exceptions import StageNotFoundException
@@ -20,8 +22,7 @@ class Deployment(BaseModel, dict):
         self['id'] = deployment_id
         self['stageName'] = name
         self['description'] = description
-        self['createdDate'] = iso_8601_datetime_with_milliseconds(
-            datetime.datetime.now())
+        self['createdDate'] = int(time.time())
 class IntegrationResponse(BaseModel, dict):
@@ -293,6 +294,25 @@ class Stage(BaseModel, dict):
         raise Exception('Patch operation "%s" not implemented' % op['op'])
+class ApiKey(BaseModel, dict):
+    def __init__(self, name=None, description=None, enabled=True,
+                 generateDistinctId=False, value=None, stageKeys=None, customerId=None):
+        super(ApiKey, self).__init__()
+        self['id'] = create_id()
+        if generateDistinctId:
+            # Best guess of what AWS does internally
+            self['value'] = ''.join(random.sample(string.ascii_letters + string.digits, 40))
+        else:
+            self['value'] = value
+        self['name'] = name
+        self['customerId'] = customerId
+        self['description'] = description
+        self['enabled'] = enabled
+        self['createdDate'] = self['lastUpdatedDate'] = int(time.time())
+        self['stageKeys'] = stageKeys
 class RestAPI(BaseModel):
     def __init__(self, id, region_name, name, description):
@@ -300,7 +320,7 @@ class RestAPI(BaseModel):
         self.region_name = region_name
         self.name = name
         self.description = description
-        self.create_date = datetime.datetime.utcnow()
+        self.create_date = int(time.time())
         self.deployments = {}
         self.stages = {}
@@ -313,7 +333,7 @@ class RestAPI(BaseModel):
             "id": self.id,
             "name": self.name,
             "description": self.description,
-            "createdDate": iso_8601_datetime_with_milliseconds(self.create_date),
+            "createdDate": int(time.time()),
         }
     def add_child(self, path, parent_id=None):
@@ -388,6 +408,7 @@ class APIGatewayBackend(BaseBackend):
     def __init__(self, region_name):
         super(APIGatewayBackend, self).__init__()
         self.apis = {}
+        self.keys = {}
         self.region_name = region_name
     def reset(self):
@@ -541,8 +562,22 @@ class APIGatewayBackend(BaseBackend):
         api = self.get_rest_api(function_id)
         return api.delete_deployment(deployment_id)
+    def create_apikey(self, payload):
+        key = ApiKey(**payload)
+        self.keys[key['id']] = key
+        return key
+
+    def get_apikeys(self):
+        return list(self.keys.values())
+
+    def get_apikey(self, api_key_id):
+        return self.keys[api_key_id]
+
+    def delete_apikey(self, api_key_id):
+        self.keys.pop(api_key_id)
+        return {}
 apigateway_backends = {}
-# Not available in boto yet
-for region_name in ['us-east-1', 'us-west-2', 'eu-west-1', 'ap-northeast-1']:
+for region_name in Session().get_available_regions('apigateway'):
     apigateway_backends[region_name] = APIGatewayBackend(region_name)

@@ -226,3 +226,25 @@ class APIGatewayResponse(BaseResponse):
         deployment = self.backend.delete_deployment(
             function_id, deployment_id)
         return 200, {}, json.dumps(deployment)
+
+    def apikeys(self, request, full_url, headers):
+        self.setup_class(request, full_url, headers)
+
+        if self.method == 'POST':
+            apikey_response = self.backend.create_apikey(json.loads(self.body))
+        elif self.method == 'GET':
+            apikeys_response = self.backend.get_apikeys()
+            return 200, {}, json.dumps({"item": apikeys_response})
+        return 200, {}, json.dumps(apikey_response)
+
+    def apikey_individual(self, request, full_url, headers):
+        self.setup_class(request, full_url, headers)
+
+        url_path_parts = self.path.split("/")
+        apikey = url_path_parts[2]
+
+        if self.method == 'GET':
+            apikey_response = self.backend.get_apikey(apikey)
+        elif self.method == 'DELETE':
+            apikey_response = self.backend.delete_apikey(apikey)
+        return 200, {}, json.dumps(apikey_response)

@@ -18,4 +18,6 @@ url_paths = {
     '{0}/restapis/(?P<function_id>[^/]+)/resources/(?P<resource_id>[^/]+)/methods/(?P<method_name>[^/]+)/responses/(?P<status_code>\d+)$': APIGatewayResponse().resource_method_responses,
     '{0}/restapis/(?P<function_id>[^/]+)/resources/(?P<resource_id>[^/]+)/methods/(?P<method_name>[^/]+)/integration/?$': APIGatewayResponse().integrations,
     '{0}/restapis/(?P<function_id>[^/]+)/resources/(?P<resource_id>[^/]+)/methods/(?P<method_name>[^/]+)/integration/responses/(?P<status_code>\d+)/?$': APIGatewayResponse().integration_responses,
+    '{0}/apikeys$': APIGatewayResponse().apikeys,
+    '{0}/apikeys/(?P<apikey>[^/]+)': APIGatewayResponse().apikey_individual,
 }
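Taken together, the model, response handler, and routes above wire up basic API key support; a hedged boto3 round-trip under the mock might look like:

    import boto3
    from moto import mock_apigateway


    @mock_apigateway
    def test_api_keys():
        client = boto3.client('apigateway', region_name='us-east-1')
        key = client.create_api_key(name='my-key', value='01234567890123456789', enabled=True)
        assert client.get_api_key(apiKey=key['id'])['name'] == 'my-key'
        client.delete_api_key(apiKey=key['id'])
        assert client.get_api_keys()['items'] == []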

@@ -3,11 +3,12 @@ from moto.core.exceptions import RESTError
 class AutoscalingClientError(RESTError):
+    code = 400
+
+class ResourceContentionError(RESTError):
     code = 500
-class ResourceContentionError(AutoscalingClientError):
     def __init__(self):
         super(ResourceContentionError, self).__init__(
             "ResourceContentionError",

@@ -7,7 +7,7 @@ from moto.elb import elb_backends
 from moto.elbv2 import elbv2_backends
 from moto.elb.exceptions import LoadBalancerNotFoundError
 from .exceptions import (
-    ResourceContentionError,
+    AutoscalingClientError, ResourceContentionError,
 )
 # http://docs.aws.amazon.com/AutoScaling/latest/DeveloperGuide/AS_Concepts.html#Cooldown
@@ -155,14 +155,21 @@
                  autoscaling_backend, tags):
         self.autoscaling_backend = autoscaling_backend
         self.name = name
+
+        if not availability_zones and not vpc_zone_identifier:
+            raise AutoscalingClientError(
+                "ValidationError",
+                "At least one Availability Zone or VPC Subnet is required."
+            )
         self.availability_zones = availability_zones
+        self.vpc_zone_identifier = vpc_zone_identifier
         self.max_size = max_size
         self.min_size = min_size
         self.launch_config = self.autoscaling_backend.launch_configurations[
             launch_config_name]
         self.launch_config_name = launch_config_name
-        self.vpc_zone_identifier = vpc_zone_identifier
         self.default_cooldown = default_cooldown if default_cooldown else DEFAULT_COOLDOWN
         self.health_check_period = health_check_period
@@ -172,6 +179,7 @@ class FakeAutoScalingGroup(BaseModel):
         self.placement_group = placement_group
         self.termination_policies = termination_policies
+        self.suspended_processes = []
         self.instance_states = []
         self.tags = tags if tags else []
         self.set_desired_capacity(desired_capacity)
@@ -614,6 +622,10 @@ class AutoScalingBackend(BaseBackend):
         asg_targets = [{'id': x.instance.id} for x in group.instance_states]
         self.elbv2_backend.deregister_targets(target_group, (asg_targets))
+    def suspend_processes(self, group_name, scaling_processes):
+        group = self.autoscaling_groups[group_name]
+        group.suspended_processes = scaling_processes or []
+
 autoscaling_backends = {}
 for region, ec2_backend in ec2_backends.items():

@@ -166,7 +166,7 @@ class AutoScalingResponse(BaseResponse):
             start = all_names.index(token) + 1
         else:
             start = 0
-        max_records = self._get_param("MaxRecords", 50)
+        max_records = self._get_int_param("MaxRecords", 50)
         if max_records > 100:
             raise ValueError
         groups = all_groups[start:start + max_records]
@@ -283,6 +283,13 @@ class AutoScalingResponse(BaseResponse):
         template = self.response_template(DETACH_LOAD_BALANCERS_TEMPLATE)
         return template.render()
+    def suspend_processes(self):
+        autoscaling_group_name = self._get_param('AutoScalingGroupName')
+        scaling_processes = self._get_multi_param('ScalingProcesses.member')
+        self.autoscaling_backend.suspend_processes(autoscaling_group_name, scaling_processes)
+        template = self.response_template(SUSPEND_PROCESSES_TEMPLATE)
+        return template.render()
+
 CREATE_LAUNCH_CONFIGURATION_TEMPLATE = """<CreateLaunchConfigurationResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
 <ResponseMetadata>
@@ -463,7 +470,14 @@ DESCRIBE_AUTOSCALING_GROUPS_TEMPLATE = """<DescribeAutoScalingGroupsResponse xml
           </member>
           {% endfor %}
         </Tags>
-        <SuspendedProcesses/>
+        <SuspendedProcesses>
+          {% for suspended_process in group.suspended_processes %}
+          <member>
+            <ProcessName>{{suspended_process}}</ProcessName>
+            <SuspensionReason></SuspensionReason>
+          </member>
+          {% endfor %}
+        </SuspendedProcesses>
         <AutoScalingGroupName>{{ group.name }}</AutoScalingGroupName>
         <HealthCheckType>{{ group.health_check_type }}</HealthCheckType>
         <CreatedTime>2013-05-06T17:47:15.107Z</CreatedTime>
@@ -644,6 +658,12 @@ DETACH_LOAD_BALANCERS_TEMPLATE = """<DetachLoadBalancersResponse xmlns="http://a
 </ResponseMetadata>
 </DetachLoadBalancersResponse>"""
+SUSPEND_PROCESSES_TEMPLATE = """<SuspendProcessesResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
+<ResponseMetadata>
+   <RequestId>7c6e177f-f082-11e1-ac58-3714bEXAMPLE</RequestId>
+</ResponseMetadata>
+</SuspendProcessesResponse>"""
 SET_INSTANCE_HEALTH_TEMPLATE = """<SetInstanceHealthResponse xmlns="http://autoscaling.amazonaws.com/doc/2011-01-01/">
 <SetInstanceHealthResponse></SetInstanceHealthResponse>
 <ResponseMetadata>
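A hedged end-to-end sketch of the new SuspendProcesses support via boto3:

    import boto3
    from moto import mock_autoscaling


    @mock_autoscaling
    def test_suspend_processes():
        client = boto3.client('autoscaling', region_name='us-east-1')
        client.create_launch_configuration(LaunchConfigurationName='lc')
        client.create_auto_scaling_group(
            AutoScalingGroupName='my-asg',
            LaunchConfigurationName='lc',
            MinSize=0, MaxSize=1,
            AvailabilityZones=['us-east-1a'],  # now required: an AZ or a VPC subnet
        )
        client.suspend_processes(
            AutoScalingGroupName='my-asg',
            ScalingProcesses=['Launch', 'Terminate'],
        )
        group = client.describe_auto_scaling_groups(
            AutoScalingGroupNames=['my-asg'])['AutoScalingGroups'][0]
        names = {p['ProcessName'] for p in group['SuspendedProcesses']}
        assert names == {'Launch', 'Terminate'}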

@@ -104,7 +104,7 @@ class _DockerDataVolumeContext:
                 # It doesn't exist so we need to create it
                 self._vol_ref.volume = self._lambda_func.docker_client.volumes.create(self._lambda_func.code_sha_256)
-                container = self._lambda_func.docker_client.containers.run('alpine', 'sleep 100', volumes={self.name: '/tmp/data'}, detach=True)
+                container = self._lambda_func.docker_client.containers.run('alpine', 'sleep 100', volumes={self.name: {'bind': '/tmp/data', 'mode': 'rw'}}, detach=True)
                 try:
                     tar_bytes = zip2tar(self._lambda_func.code_bytes)
                     container.put_archive('/tmp/data', tar_bytes)
@@ -309,7 +309,7 @@ class LambdaFunction(BaseModel):
         finally:
             if container:
                 try:
-                    exit_code = container.wait(timeout=300)
+                    exit_code = container.wait(timeout=300)['StatusCode']
                 except requests.exceptions.ReadTimeout:
                     exit_code = -1
                     container.stop()
@@ -603,7 +603,7 @@ class LambdaBackend(BaseBackend):
     def list_functions(self):
         return self._lambdas.all()
-    def send_message(self, function_name, message, subject=None):
+    def send_message(self, function_name, message, subject=None, qualifier=None):
         event = {
             "Records": [
                 {
@@ -636,8 +636,8 @@ class LambdaBackend(BaseBackend):
             ]
         }
-        self._functions[function_name][-1].invoke(json.dumps(event), {}, {})
-        pass
+        func = self._lambdas.get_function(function_name, qualifier)
+        func.invoke(json.dumps(event), {}, {})
     def list_tags(self, resource):
         return self.get_function_by_arn(resource).tags

@@ -94,25 +94,21 @@ class LambdaResponse(BaseResponse):
             return self._add_policy(request, full_url, headers)
     def _add_policy(self, request, full_url, headers):
-        lambda_backend = self.get_lambda_backend(full_url)
-
         path = request.path if hasattr(request, 'path') else request.path_url
         function_name = path.split('/')[-2]
-        if lambda_backend.has_function(function_name):
+        if self.lambda_backend.get_function(function_name):
             policy = request.body.decode('utf8')
-            lambda_backend.add_policy(function_name, policy)
+            self.lambda_backend.add_policy(function_name, policy)
             return 200, {}, json.dumps(dict(Statement=policy))
         else:
            return 404, {}, "{}"
     def _get_policy(self, request, full_url, headers):
-        lambda_backend = self.get_lambda_backend(full_url)
-
         path = request.path if hasattr(request, 'path') else request.path_url
         function_name = path.split('/')[-2]
-        if lambda_backend.has_function(function_name):
-            function = lambda_backend.get_function(function_name)
-            return 200, {}, json.dumps(dict(Policy="{\"Statement\":[" + function.policy + "]}"))
+        if self.lambda_backend.get_function(function_name):
+            lambda_function = self.lambda_backend.get_function(function_name)
+            return 200, {}, json.dumps(dict(Policy="{\"Statement\":[" + lambda_function.policy + "]}"))
         else:
             return 404, {}, "{}"

@@ -6,6 +6,7 @@ from moto.autoscaling import autoscaling_backends
 from moto.awslambda import lambda_backends
 from moto.cloudformation import cloudformation_backends
 from moto.cloudwatch import cloudwatch_backends
+from moto.cognitoidentity import cognitoidentity_backends
 from moto.core import moto_api_backends
 from moto.datapipeline import datapipeline_backends
 from moto.dynamodb import dynamodb_backends
@@ -34,6 +35,7 @@ from moto.sns import sns_backends
 from moto.sqs import sqs_backends
 from moto.ssm import ssm_backends
 from moto.sts import sts_backends
+from moto.swf import swf_backends
 from moto.xray import xray_backends
 from moto.iot import iot_backends
 from moto.iotdata import iotdata_backends
@@ -48,6 +50,7 @@ BACKENDS = {
     'batch': batch_backends,
     'cloudformation': cloudformation_backends,
     'cloudwatch': cloudwatch_backends,
+    'cognito-identity': cognitoidentity_backends,
     'datapipeline': datapipeline_backends,
     'dynamodb': dynamodb_backends,
     'dynamodb2': dynamodb_backends2,
@@ -76,6 +79,7 @@ BACKENDS = {
     'sqs': sqs_backends,
     'ssm': ssm_backends,
     'sts': sts_backends,
+    'swf': swf_backends,
     'route53': route53_backends,
     'lambda': lambda_backends,
     'xray': xray_backends,

@@ -107,7 +107,8 @@ class FakeStack(BaseModel):
     def update(self, template, role_arn=None, parameters=None, tags=None):
         self._add_stack_event("UPDATE_IN_PROGRESS", resource_status_reason="User Initiated")
         self.template = template
-        self.resource_map.update(json.loads(template), parameters)
+        self._parse_template()
+        self.resource_map.update(self.template_dict, parameters)
         self.output_map = self._create_output_map()
         self._add_stack_event("UPDATE_COMPLETE")
         self.status = "UPDATE_COMPLETE"
@@ -188,6 +189,24 @@ class CloudFormationBackend(BaseBackend):
         self.change_sets[change_set_id] = stack
         return change_set_id, stack.stack_id
+    def execute_change_set(self, change_set_name, stack_name=None):
+        stack = None
+        if change_set_name in self.change_sets:
+            # This means arn was passed in
+            stack = self.change_sets[change_set_name]
+        else:
+            for cs in self.change_sets:
+                if self.change_sets[cs].name == change_set_name:
+                    stack = self.change_sets[cs]
+        if stack is None:
+            raise ValidationError(stack_name)
+        if stack.events[-1].resource_status == 'REVIEW_IN_PROGRESS':
+            stack._add_stack_event('CREATE_COMPLETE')
+        else:
+            stack._add_stack_event('UPDATE_IN_PROGRESS')
+            stack._add_stack_event('UPDATE_COMPLETE')
+        return True
+
     def describe_stacks(self, name_or_stack_id):
         stacks = self.stacks.values()
         if name_or_stack_id:
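A hedged boto3 sketch of driving the new execute_change_set backend method (assuming this version's create_change_set accepts these parameters; the template is kept trivial):

    import json
    import boto3
    from moto import mock_cloudformation

    template = json.dumps({
        "Resources": {
            "Bucket": {"Type": "AWS::S3::Bucket"}
        }
    })


    @mock_cloudformation
    def test_execute_change_set():
        cf = boto3.client('cloudformation', region_name='us-east-1')
        cf.create_change_set(
            StackName='my-stack', ChangeSetName='my-change-set',
            TemplateBody=template, ChangeSetType='CREATE',
        )
        cf.execute_change_set(ChangeSetName='my-change-set')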

@@ -10,6 +10,7 @@ from moto.autoscaling import models as autoscaling_models
 from moto.awslambda import models as lambda_models
 from moto.batch import models as batch_models
 from moto.cloudwatch import models as cloudwatch_models
+from moto.cognitoidentity import models as cognitoidentity_models
 from moto.datapipeline import models as datapipeline_models
 from moto.dynamodb import models as dynamodb_models
 from moto.ec2 import models as ec2_models
@@ -65,6 +66,7 @@ MODEL_MAP = {
     "AWS::ElasticLoadBalancingV2::LoadBalancer": elbv2_models.FakeLoadBalancer,
     "AWS::ElasticLoadBalancingV2::TargetGroup": elbv2_models.FakeTargetGroup,
     "AWS::ElasticLoadBalancingV2::Listener": elbv2_models.FakeListener,
+    "AWS::Cognito::IdentityPool": cognitoidentity_models.CognitoIdentity,
     "AWS::DataPipeline::Pipeline": datapipeline_models.Pipeline,
     "AWS::IAM::InstanceProfile": iam_models.InstanceProfile,
     "AWS::IAM::Role": iam_models.Role,
@@ -106,6 +108,8 @@ NULL_MODELS = [
     "AWS::CloudFormation::WaitConditionHandle",
 ]
+DEFAULT_REGION = 'us-east-1'
+
 logger = logging.getLogger("moto")
@@ -203,6 +207,14 @@ def clean_json(resource_json, resources_map):
             if any(values):
                 return values[0]
+        if 'Fn::GetAZs' in resource_json:
+            region = resource_json.get('Fn::GetAZs') or DEFAULT_REGION
+            result = []
+            # TODO: make this configurable, to reflect the real AWS AZs
+            for az in ('a', 'b', 'c', 'd'):
+                result.append('%s%s' % (region, az))
+            return result
+
         cleaned_json = {}
         for key, value in resource_json.items():
             cleaned_val = clean_json(value, resources_map)
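Under the mock, the intrinsic then resolves to the fixed a-d suffixes noted in the TODO; for example:

    >>> clean_json({"Fn::GetAZs": "us-west-2"}, {})
    ['us-west-2a', 'us-west-2b', 'us-west-2c', 'us-west-2d']
    >>> clean_json({"Fn::GetAZs": ""}, {})  # empty value falls back to DEFAULT_REGION
    ['us-east-1a', 'us-east-1b', 'us-east-1c', 'us-east-1d']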

@@ -118,6 +118,24 @@ class CloudFormationResponse(BaseResponse):
         template = self.response_template(CREATE_CHANGE_SET_RESPONSE_TEMPLATE)
         return template.render(stack_id=stack_id, change_set_id=change_set_id)
+    @amzn_request_id
+    def execute_change_set(self):
+        stack_name = self._get_param('StackName')
+        change_set_name = self._get_param('ChangeSetName')
+        self.cloudformation_backend.execute_change_set(
+            stack_name=stack_name,
+            change_set_name=change_set_name,
+        )
+        if self.request_json:
+            return json.dumps({
+                'ExecuteChangeSetResponse': {
+                    'ExecuteChangeSetResult': {},
+                }
+            })
+        else:
+            template = self.response_template(EXECUTE_CHANGE_SET_RESPONSE_TEMPLATE)
+            return template.render()
+
     def describe_stacks(self):
         stack_name_or_id = None
         if self._get_param('StackName'):
@@ -203,19 +221,25 @@
         stack_name = self._get_param('StackName')
         role_arn = self._get_param('RoleARN')
         template_url = self._get_param('TemplateURL')
+        stack_body = self._get_param('TemplateBody')
+        stack = self.cloudformation_backend.get_stack(stack_name)
         if self._get_param('UsePreviousTemplate') == "true":
-            stack_body = self.cloudformation_backend.get_stack(
-                stack_name).template
-        elif template_url:
+            stack_body = stack.template
+        elif not stack_body and template_url:
             stack_body = self._get_stack_from_s3_url(template_url)
-        else:
-            stack_body = self._get_param('TemplateBody')
+        incoming_params = self._get_list_prefix("Parameters.member")
         parameters = dict([
             (parameter['parameter_key'], parameter['parameter_value'])
             for parameter
-            in self._get_list_prefix("Parameters.member")
+            in incoming_params if 'parameter_value' in parameter
         ])
+        previous = dict([
+            (parameter['parameter_key'], stack.parameters[parameter['parameter_key']])
+            for parameter
+            in incoming_params if 'use_previous_value' in parameter
+        ])
+        parameters.update(previous)
         # boto3 is supposed to let you clear the tags by passing an empty value, but the request body doesn't
         # end up containing anything we can use to differentiate between passing an empty value versus not
         # passing anything. so until that changes, moto won't be able to clear tags, only update them.
@@ -302,6 +326,16 @@ CREATE_CHANGE_SET_RESPONSE_TEMPLATE = """<CreateStackResponse>
 </CreateStackResponse>
 """
+EXECUTE_CHANGE_SET_RESPONSE_TEMPLATE = """<ExecuteChangeSetResponse>
+  <ExecuteChangeSetResult>
+    <ExecuteChangeSetResult/>
+  </ExecuteChangeSetResult>
+  <ResponseMetadata>
+    <RequestId>{{ request_id }}</RequestId>
+  </ResponseMetadata>
+</ExecuteChangeSetResponse>
+"""
 DESCRIBE_STACKS_TEMPLATE = """<DescribeStacksResponse>
   <DescribeStacksResult>
     <Stacks>

@@ -74,18 +74,18 @@ class FakeAlarm(BaseModel):
         self.state_reason = ''
         self.state_reason_data = '{}'
-        self.state = 'OK'
+        self.state_value = 'OK'
         self.state_updated_timestamp = datetime.utcnow()
     def update_state(self, reason, reason_data, state_value):
         # History type, that then decides what the rest of the items are, can be one of ConfigurationUpdate | StateUpdate | Action
         self.history.append(
-            ('StateUpdate', self.state_reason, self.state_reason_data, self.state, self.state_updated_timestamp)
+            ('StateUpdate', self.state_reason, self.state_reason_data, self.state_value, self.state_updated_timestamp)
         )
         self.state_reason = reason
         self.state_reason_data = reason_data
-        self.state = state_value
+        self.state_value = state_value
         self.state_updated_timestamp = datetime.utcnow()
@@ -221,7 +221,7 @@ class CloudWatchBackend(BaseBackend):
         ]
     def get_alarms_by_state_value(self, target_state):
-        return filter(lambda alarm: alarm.state == target_state, self.alarms.values())
+        return filter(lambda alarm: alarm.state_value == target_state, self.alarms.values())
     def delete_alarms(self, alarm_names):
         for alarm_name in alarm_names:

@@ -0,0 +1,7 @@
+from __future__ import unicode_literals
+from .models import cognitoidentity_backends
+from ..core.models import base_decorator, deprecated_base_decorator
+
+cognitoidentity_backend = cognitoidentity_backends['us-east-1']
+mock_cognitoidentity = base_decorator(cognitoidentity_backends)
+mock_cognitoidentity_deprecated = deprecated_base_decorator(cognitoidentity_backends)

@@ -0,0 +1,101 @@
+from __future__ import unicode_literals
+
+import datetime
+import json
+
+import boto.cognito.identity
+
+from moto.compat import OrderedDict
+from moto.core import BaseBackend, BaseModel
+from moto.core.utils import iso_8601_datetime_with_milliseconds
+from .utils import get_random_identity_id
+
+
+class CognitoIdentity(BaseModel):
+
+    def __init__(self, region, identity_pool_name, **kwargs):
+        self.identity_pool_name = identity_pool_name
+        self.allow_unauthenticated_identities = kwargs.get('allow_unauthenticated_identities', '')
+        self.supported_login_providers = kwargs.get('supported_login_providers', {})
+        self.developer_provider_name = kwargs.get('developer_provider_name', '')
+        self.open_id_connect_provider_arns = kwargs.get('open_id_connect_provider_arns', [])
+        self.cognito_identity_providers = kwargs.get('cognito_identity_providers', [])
+        self.saml_provider_arns = kwargs.get('saml_provider_arns', [])
+
+        self.identity_pool_id = get_random_identity_id(region)
+        self.creation_time = datetime.datetime.utcnow()
+
+
+class CognitoIdentityBackend(BaseBackend):
+
+    def __init__(self, region):
+        super(CognitoIdentityBackend, self).__init__()
+        self.region = region
+        self.identity_pools = OrderedDict()
+
+    def reset(self):
+        region = self.region
+        self.__dict__ = {}
+        self.__init__(region)
+
+    def create_identity_pool(self, identity_pool_name, allow_unauthenticated_identities,
+                             supported_login_providers, developer_provider_name, open_id_connect_provider_arns,
+                             cognito_identity_providers, saml_provider_arns):
+        new_identity = CognitoIdentity(self.region, identity_pool_name,
+                                       allow_unauthenticated_identities=allow_unauthenticated_identities,
+                                       supported_login_providers=supported_login_providers,
+                                       developer_provider_name=developer_provider_name,
+                                       open_id_connect_provider_arns=open_id_connect_provider_arns,
+                                       cognito_identity_providers=cognito_identity_providers,
+                                       saml_provider_arns=saml_provider_arns)
+        self.identity_pools[new_identity.identity_pool_id] = new_identity
+
+        response = json.dumps({
+            'IdentityPoolId': new_identity.identity_pool_id,
+            'IdentityPoolName': new_identity.identity_pool_name,
+            'AllowUnauthenticatedIdentities': new_identity.allow_unauthenticated_identities,
+            'SupportedLoginProviders': new_identity.supported_login_providers,
+            'DeveloperProviderName': new_identity.developer_provider_name,
+            'OpenIdConnectProviderARNs': new_identity.open_id_connect_provider_arns,
+            'CognitoIdentityProviders': new_identity.cognito_identity_providers,
+            'SamlProviderARNs': new_identity.saml_provider_arns
+        })
+        return response
+
+    def get_id(self):
+        identity_id = {'IdentityId': get_random_identity_id(self.region)}
+        return json.dumps(identity_id)
+
+    def get_credentials_for_identity(self, identity_id):
+        duration = 90
+        now = datetime.datetime.utcnow()
+        expiration = now + datetime.timedelta(seconds=duration)
+        expiration_str = str(iso_8601_datetime_with_milliseconds(expiration))
+        response = json.dumps(
+            {
+                "Credentials":
+                {
+                    "AccessKeyId": "TESTACCESSKEY12345",
+                    "Expiration": expiration_str,
+                    "SecretKey": "ABCSECRETKEY",
+                    "SessionToken": "ABC12345"
+                },
+                "IdentityId": identity_id
+            })
+        return response
+
+    def get_open_id_token_for_developer_identity(self, identity_id):
+        response = json.dumps(
+            {
+                "IdentityId": identity_id,
+                "Token": get_random_identity_id(self.region)
+            })
+        return response
+
+
+cognitoidentity_backends = {}
+for region in boto.cognito.identity.regions():
+    cognitoidentity_backends[region.name] = CognitoIdentityBackend(region.name)

@@ -0,0 +1,34 @@
+from __future__ import unicode_literals
+
+from moto.core.responses import BaseResponse
+from .models import cognitoidentity_backends
+
+
+class CognitoIdentityResponse(BaseResponse):
+
+    def create_identity_pool(self):
+        identity_pool_name = self._get_param('IdentityPoolName')
+        allow_unauthenticated_identities = self._get_param('AllowUnauthenticatedIdentities')
+        supported_login_providers = self._get_param('SupportedLoginProviders')
+        developer_provider_name = self._get_param('DeveloperProviderName')
+        open_id_connect_provider_arns = self._get_param('OpenIdConnectProviderARNs')
+        cognito_identity_providers = self._get_param('CognitoIdentityProviders')
+        saml_provider_arns = self._get_param('SamlProviderARNs')
+        return cognitoidentity_backends[self.region].create_identity_pool(
+            identity_pool_name=identity_pool_name,
+            allow_unauthenticated_identities=allow_unauthenticated_identities,
+            supported_login_providers=supported_login_providers,
+            developer_provider_name=developer_provider_name,
+            open_id_connect_provider_arns=open_id_connect_provider_arns,
+            cognito_identity_providers=cognito_identity_providers,
+            saml_provider_arns=saml_provider_arns)
+
+    def get_id(self):
+        return cognitoidentity_backends[self.region].get_id()
+
+    def get_credentials_for_identity(self):
+        return cognitoidentity_backends[self.region].get_credentials_for_identity(self._get_param('IdentityId'))
+
+    def get_open_id_token_for_developer_identity(self):
+        return cognitoidentity_backends[self.region].get_open_id_token_for_developer_identity(self._get_param('IdentityId'))
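A hedged boto3 sketch of the new @mock_cognitoidentity decorator in action:

    import boto3
    from moto import mock_cognitoidentity


    @mock_cognitoidentity
    def test_identity_pool():
        client = boto3.client('cognito-identity', region_name='us-east-1')
        pool = client.create_identity_pool(
            IdentityPoolName='test-pool',
            AllowUnauthenticatedIdentities=True,
        )
        assert pool['IdentityPoolName'] == 'test-pool'
        identity = client.get_id(IdentityPoolId=pool['IdentityPoolId'])
        creds = client.get_credentials_for_identity(IdentityId=identity['IdentityId'])
        assert 'AccessKeyId' in creds['Credentials']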

@@ -0,0 +1,10 @@
+from __future__ import unicode_literals
+from .responses import CognitoIdentityResponse
+
+url_bases = [
+    "https?://cognito-identity.(.+).amazonaws.com",
+]
+
+url_paths = {
+    '{0}/$': CognitoIdentityResponse.dispatch,
+}

@@ -0,0 +1,5 @@
+from moto.core.utils import get_random_hex
+
+
+def get_random_identity_id(region):
+    return "{0}:{1}".format(region, get_random_hex(length=19))

@@ -9,7 +9,7 @@ import re
 import six
 from moto import settings
-from moto.packages.responses import responses
+import responses
 from moto.packages.httpretty import HTTPretty
 from .utils import (
     convert_httpretty_response,
@@ -124,31 +124,102 @@ RESPONSES_METHODS = [responses.GET, responses.DELETE, responses.HEAD,
                      responses.OPTIONS, responses.PATCH, responses.POST, responses.PUT]
-class ResponsesMockAWS(BaseMockAWS):
+class CallbackResponse(responses.CallbackResponse):
+    '''
+    Need to subclass so we can change a couple things
+    '''
+    def get_response(self, request):
+        '''
+        Need to override this so we can pass decode_content=False
+        '''
+        headers = self.get_headers()
+
+        result = self.callback(request)
+        if isinstance(result, Exception):
+            raise result
+
+        status, r_headers, body = result
+        body = responses._handle_body(body)
+        headers.update(r_headers)
+
+        return responses.HTTPResponse(
+            status=status,
+            reason=six.moves.http_client.responses.get(status),
+            body=body,
+            headers=headers,
+            preload_content=False,
+            # Need to not decode_content to mimic requests
+            decode_content=False,
+        )
+
+    def _url_matches(self, url, other, match_querystring=False):
+        '''
+        Need to override this so we can fix querystrings breaking regex matching
+        '''
+        if not match_querystring:
+            other = other.split('?', 1)[0]
+
+        if responses._is_string(url):
+            if responses._has_unicode(url):
+                url = responses._clean_unicode(url)
+                if not isinstance(other, six.text_type):
+                    other = other.encode('ascii').decode('utf8')
+            return self._url_matches_strict(url, other)
+        elif isinstance(url, responses.Pattern) and url.match(other):
+            return True
+        else:
+            return False
+
+
+botocore_mock = responses.RequestsMock(assert_all_requests_are_fired=False, target='botocore.vendored.requests.adapters.HTTPAdapter.send')
+responses_mock = responses._default_mock
+
+
+class ResponsesMockAWS(BaseMockAWS):
     def reset(self):
-        responses.reset()
+        botocore_mock.reset()
+        responses_mock.reset()
     def enable_patching(self):
-        responses.start()
+        if not hasattr(botocore_mock, '_patcher') or not hasattr(botocore_mock._patcher, 'target'):
+            # Check for unactivated patcher
+            botocore_mock.start()
+
+        if not hasattr(responses_mock, '_patcher') or not hasattr(responses_mock._patcher, 'target'):
+            responses_mock.start()
+
         for method in RESPONSES_METHODS:
             for backend in self.backends_for_urls.values():
                 for key, value in backend.urls.items():
-                    responses.add_callback(
-                        method=method,
-                        url=re.compile(key),
-                        callback=convert_flask_to_responses_response(value),
+                    responses_mock.add(
+                        CallbackResponse(
+                            method=method,
+                            url=re.compile(key),
+                            callback=convert_flask_to_responses_response(value),
+                            stream=True,
+                            match_querystring=False,
+                        )
+                    )
+                    botocore_mock.add(
+                        CallbackResponse(
+                            method=method,
+                            url=re.compile(key),
+                            callback=convert_flask_to_responses_response(value),
+                            stream=True,
+                            match_querystring=False,
+                        )
                     )
-        for pattern in responses.mock._urls:
-            pattern['stream'] = True
     def disable_patching(self):
         try:
-            responses.stop()
-        except AttributeError:
+            botocore_mock.stop()
+        except RuntimeError:
+            pass
+
+        try:
+            responses_mock.stop()
+        except RuntimeError:
             pass
-        responses.reset()
 MockAWS = ResponsesMockAWS

@@ -108,6 +108,7 @@ class BaseResponse(_TemplateEnvironmentMixin):
     # to extract region, use [^.]
     region_regex = re.compile(r'\.(?P<region>[a-z]{2}-[a-z]+-\d{1})\.amazonaws\.com')
     param_list_regex = re.compile(r'(.*)\.(\d+)\.')
+    access_key_regex = re.compile(r'AWS.*(?P<access_key>(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9]))[:/]')
     aws_service_spec = None
     @classmethod
@@ -178,6 +179,21 @@ class BaseResponse(_TemplateEnvironmentMixin):
             region = self.default_region
         return region
+    def get_current_user(self):
+        """
+        Returns the access key id used in this request as the current user id
+        """
+        if 'Authorization' in self.headers:
+            match = self.access_key_regex.search(self.headers['Authorization'])
+            if match:
+                return match.group(1)
+
+        if self.querystring.get('AWSAccessKeyId'):
+            return self.querystring.get('AWSAccessKeyId')
+        else:
+            # Should we raise an unauthorized exception instead?
+            return '111122223333'
+
     def _dispatch(self, request, full_url, headers):
         self.setup_class(request, full_url, headers)
         return self.call_action()
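For instance, against a SigV4 Authorization header the new regex pulls out the 20-character access key id (a quick illustration, not from the test suite):

    import re

    access_key_regex = re.compile(
        r'AWS.*(?P<access_key>(?<![A-Z0-9])[A-Z0-9]{20}(?![A-Z0-9]))[:/]')
    header = ('AWS4-HMAC-SHA256 '
              'Credential=AKIAIOSFODNN7EXAMPLE/20180502/us-east-1/apigateway/aws4_request')
    print(access_key_regex.search(header).group('access_key'))  # AKIAIOSFODNN7EXAMPLE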
@@ -272,6 +288,9 @@ class BaseResponse(_TemplateEnvironmentMixin):
                 headers['status'] = str(headers['status'])
             return status, headers, body
+        if not action:
+            return 404, headers, ''
+
         raise NotImplementedError(
             "The {0} action has not been implemented".format(action))
@@ -326,6 +345,10 @@ class BaseResponse(_TemplateEnvironmentMixin):
             if is_tracked(name) or not name.startswith(param_prefix):
                 continue
+            if len(name) > len(param_prefix) and \
+                    not name[len(param_prefix):].startswith('.'):
+                continue
+
             match = self.param_list_regex.search(name[len(param_prefix):]) if len(name) > len(param_prefix) else None
             if match:
                 prefix = param_prefix + match.group(1)
@@ -469,6 +492,54 @@ class BaseResponse(_TemplateEnvironmentMixin):
         return results
+    def _get_object_map(self, prefix, name='Name', value='Value'):
+        """
+        Given a query dict like
+        {
+            Prefix.1.Name: [u'event'],
+            Prefix.1.Value.StringValue: [u'order_cancelled'],
+            Prefix.1.Value.DataType: [u'String'],
+            Prefix.2.Name: [u'store'],
+            Prefix.2.Value.StringValue: [u'example_corp'],
+            Prefix.2.Value.DataType: [u'String'],
+        }
+
+        returns
+        {
+            'event': {
+                'DataType': 'String',
+                'StringValue': 'order_cancelled'
+            },
+            'store': {
+                'DataType': 'String',
+                'StringValue': 'example_corp'
+            }
+        }
+        """
+        object_map = {}
+        index = 1
+        while True:
+            # Loop through looking for keys representing object name
+            name_key = '{0}.{1}.{2}'.format(prefix, index, name)
+            obj_name = self.querystring.get(name_key)
+            if not obj_name:
+                # Found all keys
+                break
+
+            obj = {}
+            value_key_prefix = '{0}.{1}.{2}.'.format(prefix, index, value)
+            for k, v in self.querystring.items():
+                if k.startswith(value_key_prefix):
+                    _, value_key = k.split(value_key_prefix, 1)
+                    obj[value_key] = v[0]
+
+            object_map[obj_name[0]] = obj
+
+            index += 1
+
+        return object_map
+
     @property
     def request_json(self):
         return 'JSON' in self.querystring.get('ContentType', [])
@@ -551,7 +622,7 @@ class AWSServiceSpec(object):
     def __init__(self, path):
         self.path = resource_filename('botocore', path)
-        with open(self.path) as f:
+        with open(self.path, "rb") as f:
             spec = json.load(f)
         self.metadata = spec['metadata']
         self.operations = spec['operations']

@@ -18,6 +18,8 @@ def camelcase_to_underscores(argument):
        python underscore variable like the_new_attribute'''
     result = ''
     prev_char_title = True
+    if not argument:
+        return argument
     for index, char in enumerate(argument):
         try:
             next_char_title = argument[index + 1].istitle()

@@ -176,6 +176,8 @@ def get_filter_expression(expr, names, values):
             next_token = six.next(token_iterator)
             while next_token != ')':
+                if next_token in values_map:
+                    next_token = values_map[next_token]
                 function_list.append(next_token)
                 next_token = six.next(token_iterator)

@@ -135,7 +135,9 @@ class Item(BaseModel):
         assert len(parts) % 2 == 0, "Mismatched operators and values in update expression: '{}'".format(update_expression)
         for action, valstr in zip(parts[:-1:2], parts[1::2]):
             action = action.upper()
-            values = valstr.split(',')
+
+            # "Should" retain arguments inside (...)
+            values = re.split(r',(?![^(]*\))', valstr)
             for value in values:
                 # A Real value
                 value = value.lstrip(":").rstrip(",").strip()
@@ -145,9 +147,23 @@
             if action == "REMOVE":
                 self.attrs.pop(value, None)
             elif action == 'SET':
-                key, value = value.split("=")
+                key, value = value.split("=", 1)
                 key = key.strip()
                 value = value.strip()
+
+                # If not exists, changes value to a default if needed, else it's the same as it was
+                if value.startswith('if_not_exists'):
+                    # Function signature
+                    match = re.match(r'.*if_not_exists\((?P<path>.+),\s*(?P<default>.+)\).*', value)
+                    if not match:
+                        raise TypeError
+
+                    path, value = match.groups()
+
+                    # If it already exists, get its value so we don't overwrite it
+                    if path in self.attrs:
+                        value = self.attrs[path].cast_value
+
                 if value in expression_attribute_values:
                     value = DynamoType(expression_attribute_values[value])
                 else:
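A hedged sketch of what the new if_not_exists handling enables through boto3:

    import boto3
    from moto import mock_dynamodb2


    @mock_dynamodb2
    def test_if_not_exists():
        ddb = boto3.resource('dynamodb', region_name='us-east-1')
        table = ddb.create_table(
            TableName='t',
            KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
            AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
            ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1},
        )
        table.put_item(Item={'id': 'a'})
        # Sets 'tier' only because it does not exist yet.
        table.update_item(
            Key={'id': 'a'},
            UpdateExpression='SET tier = if_not_exists(tier, :default)',
            ExpressionAttributeValues={':default': 'free'},
        )
        assert table.get_item(Key={'id': 'a'})['Item']['tier'] == 'free'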
@@ -520,14 +536,6 @@ class Table(BaseModel):
             else:
                 results.sort(key=lambda item: item.range_key)
-        if projection_expression:
-            expressions = [x.strip() for x in projection_expression.split(',')]
-            results = copy.deepcopy(results)
-            for result in results:
-                for attr in list(result.attrs):
-                    if attr not in expressions:
-                        result.attrs.pop(attr)
-
         if scan_index_forward is False:
             results.reverse()
@@ -536,6 +544,14 @@ class Table(BaseModel):
         if filter_expression is not None:
             results = [item for item in results if filter_expression.expr(item)]
+        if projection_expression:
+            expressions = [x.strip() for x in projection_expression.split(',')]
+            results = copy.deepcopy(results)
+            for result in results:
+                for attr in list(result.attrs):
+                    if attr not in expressions:
+                        result.attrs.pop(attr)
+
         results, last_evaluated_key = self._trim_results(results, limit,
                                                          exclusive_start_key)
         return results, scanned_count, last_evaluated_key

@@ -8,6 +8,18 @@ from moto.core.utils import camelcase_to_underscores, amzn_request_id
 from .models import dynamodb_backends, dynamo_json_dump
+def has_empty_keys_or_values(_dict):
+    if _dict == "":
+        return True
+    if not isinstance(_dict, dict):
+        return False
+    return any(
+        key == '' or value == '' or
+        has_empty_keys_or_values(value)
+        for key, value in _dict.items()
+    )
+
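The helper recurses through nested maps, so an empty string at any depth is caught; for example:

    >>> has_empty_keys_or_values({'id': {'S': 'a'}})
    False
    >>> has_empty_keys_or_values({'id': {'S': ''}})
    True
    >>> has_empty_keys_or_values({'item': {'M': {'name': {'S': ''}}}})
    True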
 class DynamoHandler(BaseResponse):
     def get_endpoint_name(self, headers):
@@ -161,8 +173,7 @@ class DynamoHandler(BaseResponse):
         name = self.body['TableName']
         item = self.body['Item']
-        res = re.search('\"\"', json.dumps(item))
-        if res:
+        if has_empty_keys_or_values(item):
             er = 'com.amazonaws.dynamodb.v20111205#ValidationException'
             return (400,
                     {'server': 'amazon.com'},

@@ -280,6 +280,15 @@ class InvalidAssociationIdError(EC2ClientError):
                 .format(association_id))
+class InvalidVpcCidrBlockAssociationIdError(EC2ClientError):
+    def __init__(self, association_id):
+        super(InvalidVpcCidrBlockAssociationIdError, self).__init__(
+            "InvalidVpcCidrBlockAssociationIdError.NotFound",
+            "The vpc CIDR block association ID '{0}' does not exist"
+            .format(association_id))
+
 class InvalidVPCPeeringConnectionIdError(EC2ClientError):
     def __init__(self, vpc_peering_connection_id):
@@ -392,3 +401,22 @@ class FilterNotImplementedError(MotoNotImplementedError):
         super(FilterNotImplementedError, self).__init__(
             "The filter '{0}' for {1}".format(
                 filter_name, method_name))
+
+class CidrLimitExceeded(EC2ClientError):
+    def __init__(self, vpc_id, max_cidr_limit):
+        super(CidrLimitExceeded, self).__init__(
+            "CidrLimitExceeded",
+            "This network '{0}' has met its maximum number of allowed CIDRs: {1}".format(vpc_id, max_cidr_limit)
+        )
+
+class OperationNotPermitted(EC2ClientError):
+    def __init__(self, association_id):
+        super(OperationNotPermitted, self).__init__(
+            "OperationNotPermitted",
+            "The vpc CIDR block with association ID {} may not be disassociated. "
+            "It is the primary IPv4 CIDR block of the VPC".format(association_id)
+        )

View File

@@ -24,51 +24,54 @@ from moto.core import BaseBackend
from moto.core.models import Model, BaseModel
from moto.core.utils import iso_8601_datetime_with_milliseconds, camelcase_to_underscores
from .exceptions import (
CidrLimitExceeded,
DependencyViolationError,
EC2ClientError,
FilterNotImplementedError,
GatewayNotAttachedError,
InvalidAddressError,
InvalidAllocationIdError,
InvalidAMIIdError,
InvalidAMIAttributeItemValueError,
InvalidAssociationIdError,
InvalidCIDRSubnetError,
InvalidCustomerGatewayIdError,
InvalidDHCPOptionsIdError,
InvalidDomainError,
InvalidID,
InvalidInstanceIdError,
InvalidInternetGatewayIdError,
InvalidKeyPairDuplicateError,
InvalidKeyPairNameError,
InvalidNetworkAclIdError,
InvalidNetworkAttachmentIdError,
InvalidNetworkInterfaceIdError,
InvalidParameterValueError,
InvalidParameterValueErrorTagNull,
InvalidPermissionNotFoundError,
InvalidPermissionDuplicateError,
InvalidRouteTableIdError,
InvalidRouteError,
InvalidSecurityGroupDuplicateError,
InvalidSecurityGroupNotFoundError,
InvalidSnapshotIdError,
InvalidSubnetIdError,
InvalidVolumeIdError,
InvalidVolumeAttachmentError,
InvalidVpcCidrBlockAssociationIdError,
InvalidVPCPeeringConnectionIdError,
InvalidVPCPeeringConnectionStateTransitionError,
InvalidVPCIdError,
InvalidVpnGatewayIdError,
InvalidVpnConnectionIdError,
MalformedAMIIdError,
MalformedDHCPOptionsIdError,
MissingParameterError,
MotoNotImplementedError,
OperationNotPermitted,
ResourceAlreadyAssociatedError,
RulesPerSecurityGroupLimitExceededError,
TagLimitExceeded)
from .utils import (
EC2_RESOURCE_TO_PREFIX,
EC2_PREFIX_TO_RESOURCE,
@@ -81,6 +84,7 @@ from .utils import (
random_instance_id,
random_internet_gateway_id,
random_ip,
random_ipv6_cidr,
random_nat_gateway_id,
random_key_pair,
random_private_ip,
@@ -97,6 +101,7 @@ from .utils import (
random_subnet_association_id,
random_volume_id,
random_vpc_id,
random_vpc_cidr_association_id,
random_vpc_peering_connection_id,
generic_filter,
is_valid_resource_id,
@@ -1031,12 +1036,11 @@ class TagBackend(object):
class Ami(TaggedEC2Resource):
def __init__(self, ec2_backend, ami_id, instance=None, source_ami=None,
name=None, description=None, owner_id=111122223333,
public=False, virtualization_type=None, architecture=None,
state='available', creation_date=None, platform=None,
image_type='machine', image_location=None, hypervisor=None,
root_device_type='standard', root_device_name='/dev/sda1', sriov='simple',
region_name='us-east-1a'
):
self.ec2_backend = ec2_backend
@@ -1089,7 +1093,8 @@ class Ami(TaggedEC2Resource):
# AWS auto-creates these, we should reflect the same.
volume = self.ec2_backend.create_volume(15, region_name)
self.ebs_snapshot = self.ec2_backend.create_snapshot(
volume.id, "Auto-created snapshot for AMI %s" % self.id, owner_id)
self.ec2_backend.delete_volume(volume.id)
@property
def is_public(self):
@@ -1122,6 +1127,9 @@ class Ami(TaggedEC2Resource):
class AmiBackend(object):
AMI_REGEX = re.compile("ami-[a-z0-9]+")
def __init__(self):
self.amis = {}
@@ -1134,12 +1142,14 @@ class AmiBackend(object):
ami_id = ami['ami_id']
self.amis[ami_id] = Ami(self, **ami)
def create_image(self, instance_id, name=None, description=None, context=None):
# TODO: check that instance exists and pull info from it.
ami_id = random_ami_id()
instance = self.get_instance(instance_id)
ami = Ami(self, ami_id, instance=instance, source_ami=None,
name=name, description=description,
owner_id=context.get_current_user() if context else '111122223333')
self.amis[ami_id] = ami
return ami
@@ -1152,28 +1162,43 @@ class AmiBackend(object):
self.amis[ami_id] = ami
return ami
def describe_images(self, ami_ids=(), filters=None, exec_users=None, owners=None,
context=None):
images = self.amis.values()
if len(ami_ids):
# boto3 seems to default to just searching based on ami ids if that parameter is passed
# and, if no images are found, it raises an error
malformed_ami_ids = [ami_id for ami_id in ami_ids if not ami_id.startswith('ami-')]
if malformed_ami_ids:
raise MalformedAMIIdError(malformed_ami_ids)
images = [ami for ami in images if ami.id in ami_ids]
if len(images) == 0:
raise InvalidAMIIdError(ami_ids)
else:
# Limit images by launch permissions
if exec_users:
tmp_images = []
for ami in images:
for user_id in exec_users:
if user_id in ami.launch_permission_users:
tmp_images.append(ami)
images = tmp_images
# Limit by owner ids
if owners:
# support filtering by Owners=['self']
owners = list(map(
lambda o: context.get_current_user()
if context and o == 'self' else o,
owners))
images = [ami for ami in images if ami.owner_id in owners]
# Generic filters
if filters:
return generic_filter(filters, images)
return images
def deregister_image(self, ami_id):
@@ -1251,8 +1276,15 @@ class RegionsAndZonesBackend(object):
(region, [Zone(region + c, region) for c in 'abc'])
for region in [r.name for r in regions])
def describe_regions(self, region_names=[]):
if len(region_names) == 0:
return self.regions
ret = []
for name in region_names:
for region in self.regions:
if region.name == name:
ret.append(region)
return ret
def describe_availability_zones(self):
return self.zones[self.region_name]
@@ -1683,6 +1715,7 @@ class SecurityGroupIngress(object):
group_id = properties.get('GroupId')
ip_protocol = properties.get("IpProtocol")
cidr_ip = properties.get("CidrIp")
cidr_ipv6 = properties.get("CidrIpv6")
from_port = properties.get("FromPort")
source_security_group_id = properties.get("SourceSecurityGroupId")
source_security_group_name = properties.get("SourceSecurityGroupName")
@@ -1691,7 +1724,7 @@ class SecurityGroupIngress(object):
to_port = properties.get("ToPort")
assert group_id or group_name
assert source_security_group_name or cidr_ip or cidr_ipv6 or source_security_group_id
assert ip_protocol
if source_security_group_id:
@@ -1807,13 +1840,15 @@ class Volume(TaggedEC2Resource):
return self.id
elif filter_name == 'encrypted':
return str(self.encrypted).lower()
elif filter_name == 'availability-zone':
return self.zone.name
else:
return super(Volume, self).get_filter_value(
filter_name, 'DescribeVolumes')
class Snapshot(TaggedEC2Resource):
def __init__(self, ec2_backend, snapshot_id, volume, description, encrypted=False, owner_id='123456789012'):
self.id = snapshot_id
self.volume = volume
self.description = description
@@ -1822,6 +1857,7 @@ class Snapshot(TaggedEC2Resource):
self.ec2_backend = ec2_backend
self.status = 'completed'
self.encrypted = encrypted
self.owner_id = owner_id
def get_filter_value(self, filter_name):
if filter_name == 'description':
@@ -1913,11 +1949,13 @@ class EBSBackend(object):
volume.attachment = None
return old_attachment
def create_snapshot(self, volume_id, description, owner_id=None):
snapshot_id = random_snapshot_id()
volume = self.get_volume(volume_id)
params = [self, snapshot_id, volume, description, volume.encrypted]
if owner_id:
params.append(owner_id)
snapshot = Snapshot(*params)
self.snapshots[snapshot_id] = snapshot
return snapshot
@@ -1933,6 +1971,15 @@ class EBSBackend(object):
matches = generic_filter(filters, matches)
return matches
def copy_snapshot(self, source_snapshot_id, source_region, description=None):
source_snapshot = ec2_backends[source_region].describe_snapshots(
snapshot_ids=[source_snapshot_id])[0]
snapshot_id = random_snapshot_id()
snapshot = Snapshot(self, snapshot_id, volume=source_snapshot.volume,
description=description, encrypted=source_snapshot.encrypted)
self.snapshots[snapshot_id] = snapshot
return snapshot
def get_snapshot(self, snapshot_id):
snapshot = self.snapshots.get(snapshot_id, None)
if not snapshot:
@@ -1972,10 +2019,13 @@ class EBSBackend(object):
class VPC(TaggedEC2Resource):
def __init__(self, ec2_backend, vpc_id, cidr_block, is_default, instance_tenancy='default',
amazon_provided_ipv6_cidr_block=False):
self.ec2_backend = ec2_backend
self.id = vpc_id
self.cidr_block = cidr_block
self.cidr_block_association_set = {}
self.dhcp_options = None
self.state = 'available'
self.instance_tenancy = instance_tenancy
@@ -1985,6 +2035,10 @@ class VPC(TaggedEC2Resource):
# or VPCs created using the wizard of the VPC console
self.enable_dns_hostnames = 'true' if is_default else 'false'
self.associate_vpc_cidr_block(cidr_block)
if amazon_provided_ipv6_cidr_block:
self.associate_vpc_cidr_block(cidr_block, amazon_provided_ipv6_cidr_block=amazon_provided_ipv6_cidr_block)
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
properties = cloudformation_json['Properties']
@@ -1994,6 +2048,11 @@ class VPC(TaggedEC2Resource):
cidr_block=properties['CidrBlock'],
instance_tenancy=properties.get('InstanceTenancy', 'default')
)
for tag in properties.get("Tags", []):
tag_key = tag["Key"]
tag_value = tag["Value"]
vpc.add_tag(tag_key, tag_value)
return vpc
@property
@@ -2005,6 +2064,12 @@ class VPC(TaggedEC2Resource):
return self.id
elif filter_name in ('cidr', 'cidr-block', 'cidrBlock'):
return self.cidr_block
elif filter_name in ('cidr-block-association.cidr-block', 'ipv6-cidr-block-association.ipv6-cidr-block'):
return [c['cidr_block'] for c in self.get_cidr_block_association_set(ipv6='ipv6' in filter_name)]
elif filter_name in ('cidr-block-association.association-id', 'ipv6-cidr-block-association.association-id'):
return self.cidr_block_association_set.keys()
elif filter_name in ('cidr-block-association.state', 'ipv6-cidr-block-association.state'):
return [c['cidr_block_state']['state'] for c in self.get_cidr_block_association_set(ipv6='ipv6' in filter_name)]
elif filter_name in ('instance_tenancy', 'InstanceTenancy'):
return self.instance_tenancy
elif filter_name in ('is-default', 'isDefault'):
@@ -2016,8 +2081,37 @@ class VPC(TaggedEC2Resource):
return None
return self.dhcp_options.id
else:
return super(VPC, self).get_filter_value(filter_name, 'DescribeVpcs')
def associate_vpc_cidr_block(self, cidr_block, amazon_provided_ipv6_cidr_block=False):
max_associations = 5 if not amazon_provided_ipv6_cidr_block else 1
if len(self.get_cidr_block_association_set(amazon_provided_ipv6_cidr_block)) >= max_associations:
raise CidrLimitExceeded(self.id, max_associations)
association_id = random_vpc_cidr_association_id()
association_set = {
'association_id': association_id,
'cidr_block_state': {'state': 'associated', 'StatusMessage': ''}
}
association_set['cidr_block'] = random_ipv6_cidr() if amazon_provided_ipv6_cidr_block else cidr_block
self.cidr_block_association_set[association_id] = association_set
return association_set
def disassociate_vpc_cidr_block(self, association_id):
if self.cidr_block == self.cidr_block_association_set.get(association_id, {}).get('cidr_block'):
raise OperationNotPermitted(association_id)
response = self.cidr_block_association_set.pop(association_id, {})
if response:
response['vpc_id'] = self.id
response['cidr_block_state']['state'] = 'disassociating'
return response
def get_cidr_block_association_set(self, ipv6=False):
return [c for c in self.cidr_block_association_set.values() if ('::/' if ipv6 else '.') in c.get('cidr_block')]
class VPCBackend(object):
@@ -2025,10 +2119,9 @@ class VPCBackend(object):
self.vpcs = {}
super(VPCBackend, self).__init__()
def create_vpc(self, cidr_block, instance_tenancy='default', amazon_provided_ipv6_cidr_block=False):
vpc_id = random_vpc_id()
vpc = VPC(self, vpc_id, cidr_block, len(self.vpcs) == 0, instance_tenancy, amazon_provided_ipv6_cidr_block)
self.vpcs[vpc_id] = vpc
# AWS creates a default main route table and security group.
@@ -2101,6 +2194,18 @@ class VPCBackend(object):
else:
raise InvalidParameterValueError(attr_name)
def disassociate_vpc_cidr_block(self, association_id):
for vpc in self.vpcs.values():
response = vpc.disassociate_vpc_cidr_block(association_id)
if response:
return response
else:
raise InvalidVpcCidrBlockAssociationIdError(association_id)
def associate_vpc_cidr_block(self, vpc_id, cidr_block, amazon_provided_ipv6_cidr_block):
vpc = self.get_vpc(vpc_id)
return vpc.associate_vpc_cidr_block(cidr_block, amazon_provided_ipv6_cidr_block)
class VPCPeeringConnectionStatus(object):
def __init__(self, code='initiating-request', message=''):
@@ -2559,7 +2664,7 @@ class Route(object):
ec2_backend = ec2_backends[region_name]
route_table = ec2_backend.create_route(
route_table_id=route_table_id,
destination_cidr_block=properties.get('DestinationCidrBlock'),
gateway_id=gateway_id,
instance_id=instance_id,
interface_id=interface_id,
@@ -2912,7 +3017,7 @@ class SpotFleetRequest(TaggedEC2Resource):
'Properties']['SpotFleetRequestConfigData']
ec2_backend = ec2_backends[region_name]
spot_price = properties.get('SpotPrice')
target_capacity = properties['TargetCapacity']
iam_fleet_role = properties['IamFleetRole']
allocation_strategy = properties['AllocationStrategy']
@@ -2946,7 +3051,8 @@ class SpotFleetRequest(TaggedEC2Resource):
launch_spec_index += 1
else:  # lowestPrice
cheapest_spec = sorted(
# FIXME: change `+inf` to the on demand price scaled to weighted capacity when it's not present
self.launch_specs, key=lambda spec: float(spec.spot_price or '+inf'))[0]
weight_so_far = weight_to_add + (weight_to_add % cheapest_spec.weighted_capacity)
weight_map[cheapest_spec] = int(
weight_so_far // cheapest_spec.weighted_capacity)

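The copy_snapshot backend above reads the source snapshot out of another region's backend and registers a copy locally, preserving the volume, description, and encryption flag. A minimal cross-region sketch with boto3 (regions and sizes are arbitrary choices, not mandated by the change):

import boto3
from moto import mock_ec2

@mock_ec2
def copy_snapshot_example():
    src = boto3.client('ec2', region_name='eu-west-1')
    vol = src.create_volume(Size=10, AvailabilityZone='eu-west-1a')
    snap = src.create_snapshot(VolumeId=vol['VolumeId'], Description='source')

    dst = boto3.client('ec2', region_name='us-east-1')
    copy = dst.copy_snapshot(SourceSnapshotId=snap['SnapshotId'],
                             SourceRegion='eu-west-1',
                             Description='copy')
    print(copy['SnapshotId'])

copy_snapshot_example()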
View File

@@ -11,7 +11,7 @@ class AmisResponse(BaseResponse):
instance_id = self._get_param('InstanceId')
if self.is_not_dryrun('CreateImage'):
image = self.ec2_backend.create_image(
instance_id, name, description, context=self)
template = self.response_template(CREATE_IMAGE_RESPONSE)
return template.render(image=image)
@@ -39,7 +39,8 @@ class AmisResponse(BaseResponse):
owners = self._get_multi_param('Owner')
exec_users = self._get_multi_param('ExecutableBy')
images = self.ec2_backend.describe_images(
ami_ids=ami_ids, filters=filters, exec_users=exec_users,
owners=owners, context=self)
template = self.response_template(DESCRIBE_IMAGES_RESPONSE)
return template.render(images=images)
@@ -112,12 +113,12 @@ DESCRIBE_IMAGES_RESPONSE = """<DescribeImagesResponse xmlns="http://ec2.amazonaw
<rootDeviceName>{{ image.root_device_name }}</rootDeviceName>
<blockDeviceMapping>
<item>
<deviceName>{{ image.root_device_name }}</deviceName>
<ebs>
<snapshotId>{{ image.ebs_snapshot.id }}</snapshotId>
<volumeSize>15</volumeSize>
<deleteOnTermination>false</deleteOnTermination>
<volumeType>{{ image.root_device_type }}</volumeType>
</ebs>
</item>
</blockDeviceMapping>

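With context=self threaded through to the backend, describe_images can resolve Owners=['self'] to the calling account. A short sketch (the AMI id is a placeholder; the mock accepts arbitrary ids for run_instances, and the resulting image count depends on the backend's pre-seeded AMIs):

import boto3
from moto import mock_ec2

@mock_ec2
def describe_own_images():
    ec2 = boto3.client('ec2', region_name='us-east-1')
    reservation = ec2.run_instances(ImageId='ami-12345678', MinCount=1, MaxCount=1)
    instance_id = reservation['Instances'][0]['InstanceId']
    ec2.create_image(InstanceId=instance_id, Name='my-image')
    # 'self' is mapped to the current user/account before owner filtering
    images = ec2.describe_images(Owners=['self'])['Images']
    print(len(images))

describe_own_images()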
View File

@@ -10,7 +10,8 @@ class AvailabilityZonesAndRegions(BaseResponse):
return template.render(zones=zones)
def describe_regions(self):
region_names = self._get_multi_param('RegionName')
regions = self.ec2_backend.describe_regions(region_names)
template = self.response_template(DESCRIBE_REGIONS_RESPONSE)
return template.render(regions=regions)

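A quick sketch of the new RegionName filtering (counts depend on the mock's region table, so only the relative sizes are asserted):

import boto3
from moto import mock_ec2

@mock_ec2
def describe_regions_example():
    ec2 = boto3.client('ec2', region_name='us-east-1')
    all_regions = ec2.describe_regions()['Regions']
    just_one = ec2.describe_regions(RegionNames=['us-east-1'])['Regions']
    assert len(just_one) == 1
    assert len(all_regions) > len(just_one)

describe_regions_example()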
View File

@@ -16,15 +16,23 @@ class ElasticBlockStore(BaseResponse):
return template.render(attachment=attachment)
def copy_snapshot(self):
source_snapshot_id = self._get_param('SourceSnapshotId')
source_region = self._get_param('SourceRegion')
description = self._get_param('Description')
if self.is_not_dryrun('CopySnapshot'):
snapshot = self.ec2_backend.copy_snapshot(
source_snapshot_id, source_region, description)
template = self.response_template(COPY_SNAPSHOT_RESPONSE)
return template.render(snapshot=snapshot)
def create_snapshot(self):
volume_id = self._get_param('VolumeId')
description = self._get_param('Description')
tags = self._parse_tag_specification("TagSpecification")
snapshot_tags = tags.get('snapshot', {})
if self.is_not_dryrun('CreateSnapshot'):
snapshot = self.ec2_backend.create_snapshot(volume_id, description)
snapshot.add_tags(snapshot_tags)
template = self.response_template(CREATE_SNAPSHOT_RESPONSE)
return template.render(snapshot=snapshot)
@@ -32,10 +40,13 @@ class ElasticBlockStore(BaseResponse):
size = self._get_param('Size')
zone = self._get_param('AvailabilityZone')
snapshot_id = self._get_param('SnapshotId')
tags = self._parse_tag_specification("TagSpecification")
volume_tags = tags.get('volume', {})
encrypted = self._get_param('Encrypted', if_none=False)
if self.is_not_dryrun('CreateVolume'):
volume = self.ec2_backend.create_volume(
size, zone, snapshot_id, encrypted)
volume.add_tags(volume_tags)
template = self.response_template(CREATE_VOLUME_RESPONSE)
return template.render(volume=volume)
@@ -139,6 +150,16 @@ CREATE_VOLUME_RESPONSE = """<CreateVolumeResponse xmlns="http://ec2.amazonaws.co
<availabilityZone>{{ volume.zone.name }}</availabilityZone>
<status>creating</status>
<createTime>{{ volume.create_time}}</createTime>
<tagSet>
{% for tag in volume.get_tags() %}
<item>
<resourceId>{{ tag.resource_id }}</resourceId>
<resourceType>{{ tag.resource_type }}</resourceType>
<key>{{ tag.key }}</key>
<value>{{ tag.value }}</value>
</item>
{% endfor %}
</tagSet>
<volumeType>standard</volumeType>
</CreateVolumeResponse>"""
@@ -216,12 +237,27 @@ CREATE_SNAPSHOT_RESPONSE = """<CreateSnapshotResponse xmlns="http://ec2.amazonaw
<status>pending</status>
<startTime>{{ snapshot.start_time}}</startTime>
<progress>60%</progress>
<ownerId>{{ snapshot.owner_id }}</ownerId>
<volumeSize>{{ snapshot.volume.size }}</volumeSize>
<description>{{ snapshot.description }}</description>
<encrypted>{{ snapshot.encrypted }}</encrypted>
<tagSet>
{% for tag in snapshot.get_tags() %}
<item>
<resourceId>{{ tag.resource_id }}</resourceId>
<resourceType>{{ tag.resource_type }}</resourceType>
<key>{{ tag.key }}</key>
<value>{{ tag.value }}</value>
</item>
{% endfor %}
</tagSet>
</CreateSnapshotResponse>"""
COPY_SNAPSHOT_RESPONSE = """<CopySnapshotResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
<snapshotId>{{ snapshot.id }}</snapshotId>
</CopySnapshotResponse>"""
DESCRIBE_SNAPSHOTS_RESPONSE = """<DescribeSnapshotsResponse xmlns="http://ec2.amazonaws.com/doc/2013-10-15/">
<requestId>59dbff89-35bd-4eac-99ed-be587EXAMPLE</requestId>
<snapshotSet>
@@ -232,7 +268,7 @@ DESCRIBE_SNAPSHOTS_RESPONSE = """<DescribeSnapshotsResponse xmlns="http://ec2.am
<status>{{ snapshot.status }}</status>
<startTime>{{ snapshot.start_time}}</startTime>
<progress>100%</progress>
<ownerId>{{ snapshot.owner_id }}</ownerId>
<volumeSize>{{ snapshot.volume.size }}</volumeSize>
<description>{{ snapshot.description }}</description>
<encrypted>{{ snapshot.encrypted }}</encrypted>

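The _parse_tag_specification calls above let tags supplied at creation time come back in the create responses. A sketch of the volume case with boto3 (the tag key and value are invented):

import boto3
from moto import mock_ec2

@mock_ec2
def tagged_volume_example():
    ec2 = boto3.client('ec2', region_name='us-east-1')
    volume = ec2.create_volume(
        Size=10, AvailabilityZone='us-east-1a',
        TagSpecifications=[{'ResourceType': 'volume',
                            'Tags': [{'Key': 'team', 'Value': 'infra'}]}])
    print(volume.get('Tags'))  # the new tagSet block echoes these back

tagged_volume_example()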
View File

@@ -40,7 +40,7 @@ class SpotFleets(BaseResponse):
def request_spot_fleet(self):
spot_config = self._get_dict_param("SpotFleetRequestConfig.")
spot_price = spot_config.get('spot_price')
target_capacity = spot_config['target_capacity']
iam_fleet_role = spot_config['iam_fleet_role']
allocation_strategy = spot_config['allocation_strategy']
@@ -78,7 +78,9 @@ DESCRIBE_SPOT_FLEET_TEMPLATE = """<DescribeSpotFleetRequestsResponse xmlns="http
<spotFleetRequestId>{{ request.id }}</spotFleetRequestId>
<spotFleetRequestState>{{ request.state }}</spotFleetRequestState>
<spotFleetRequestConfig>
{% if request.spot_price %}
<spotPrice>{{ request.spot_price }}</spotPrice>
{% endif %}
<targetCapacity>{{ request.target_capacity }}</targetCapacity>
<iamFleetRole>{{ request.iam_fleet_role }}</iamFleetRole>
<allocationStrategy>{{ request.allocation_strategy }}</allocationStrategy>
@@ -93,7 +95,9 @@ DESCRIBE_SPOT_FLEET_TEMPLATE = """<DescribeSpotFleetRequestsResponse xmlns="http
<iamInstanceProfile><arn>{{ launch_spec.iam_instance_profile }}</arn></iamInstanceProfile>
<keyName>{{ launch_spec.key_name }}</keyName>
<monitoring><enabled>{{ launch_spec.monitoring }}</enabled></monitoring>
{% if launch_spec.spot_price %}
<spotPrice>{{ launch_spec.spot_price }}</spotPrice>
{% endif %}
<userData>{{ launch_spec.user_data }}</userData>
<weightedCapacity>{{ launch_spec.weighted_capacity }}</weightedCapacity>
<groupSet>

View File

@@ -9,9 +9,12 @@ class VPCs(BaseResponse):
def create_vpc(self):
cidr_block = self._get_param('CidrBlock')
instance_tenancy = self._get_param('InstanceTenancy', if_none='default')
amazon_provided_ipv6_cidr_blocks = self._get_param('AmazonProvidedIpv6CidrBlock')
vpc = self.ec2_backend.create_vpc(cidr_block, instance_tenancy,
amazon_provided_ipv6_cidr_block=amazon_provided_ipv6_cidr_blocks)
doc_date = '2013-10-15' if 'Boto/' in self.headers.get('user-agent', '') else '2016-11-15'
template = self.response_template(CREATE_VPC_RESPONSE)
return template.render(vpc=vpc, doc_date=doc_date)
def delete_vpc(self):
vpc_id = self._get_param('VpcId')
@@ -23,8 +26,9 @@ class VPCs(BaseResponse):
vpc_ids = self._get_multi_param('VpcId')
filters = filters_from_querystring(self.querystring)
vpcs = self.ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)
doc_date = '2013-10-15' if 'Boto/' in self.headers.get('user-agent', '') else '2016-11-15'
template = self.response_template(DESCRIBE_VPCS_RESPONSE)
return template.render(vpcs=vpcs, doc_date=doc_date)
def describe_vpc_attribute(self):
vpc_id = self._get_param('VpcId')
@@ -45,14 +49,63 @@ class VPCs(BaseResponse):
vpc_id, attr_name, attr_value)
return MODIFY_VPC_ATTRIBUTE_RESPONSE
def associate_vpc_cidr_block(self):
vpc_id = self._get_param('VpcId')
amazon_provided_ipv6_cidr_blocks = self._get_param('AmazonProvidedIpv6CidrBlock')
# TODO: test on AWS whether an IPv4 and an IPv6 association can be created in the same call
cidr_block = self._get_param('CidrBlock') if not amazon_provided_ipv6_cidr_blocks else None
value = self.ec2_backend.associate_vpc_cidr_block(vpc_id, cidr_block, amazon_provided_ipv6_cidr_blocks)
if not amazon_provided_ipv6_cidr_blocks:
render_template = ASSOCIATE_VPC_CIDR_BLOCK_RESPONSE
else:
render_template = IPV6_ASSOCIATE_VPC_CIDR_BLOCK_RESPONSE
template = self.response_template(render_template)
return template.render(vpc_id=vpc_id, value=value, cidr_block=value['cidr_block'],
association_id=value['association_id'], cidr_block_state='associating')
def disassociate_vpc_cidr_block(self):
association_id = self._get_param('AssociationId')
value = self.ec2_backend.disassociate_vpc_cidr_block(association_id)
if "::" in value.get('cidr_block', ''):
render_template = IPV6_DISASSOCIATE_VPC_CIDR_BLOCK_RESPONSE
else:
render_template = DISASSOCIATE_VPC_CIDR_BLOCK_RESPONSE
template = self.response_template(render_template)
return template.render(vpc_id=value['vpc_id'], cidr_block=value['cidr_block'],
association_id=value['association_id'], cidr_block_state='disassociating')
CREATE_VPC_RESPONSE = """
<CreateVpcResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
<vpc>
<vpcId>{{ vpc.id }}</vpcId>
<state>pending</state>
<cidrBlock>{{ vpc.cidr_block }}</cidrBlock>
{% if doc_date == "2016-11-15" %}
<cidrBlockAssociationSet>
{% for assoc in vpc.get_cidr_block_association_set() %}
<item>
<cidrBlock>{{assoc.cidr_block}}</cidrBlock>
<associationId>{{ assoc.association_id }}</associationId>
<cidrBlockState>
<state>{{assoc.cidr_block_state.state}}</state>
</cidrBlockState>
</item>
{% endfor %}
</cidrBlockAssociationSet>
<ipv6CidrBlockAssociationSet>
{% for assoc in vpc.get_cidr_block_association_set(ipv6=True) %}
<item>
<ipv6CidrBlock>{{assoc.cidr_block}}</ipv6CidrBlock>
<associationId>{{ assoc.association_id }}</associationId>
<ipv6CidrBlockState>
<state>{{assoc.cidr_block_state.state}}</state>
</ipv6CidrBlockState>
</item>
{% endfor %}
</ipv6CidrBlockAssociationSet>
{% endif %}
<dhcpOptionsId>{% if vpc.dhcp_options %}{{ vpc.dhcp_options.id }}{% else %}dopt-1a2b3c4d2{% endif %}</dhcpOptionsId>
<instanceTenancy>{{ vpc.instance_tenancy }}</instanceTenancy>
<tagSet>
@@ -69,14 +122,38 @@ CREATE_VPC_RESPONSE = """
</CreateVpcResponse>"""
DESCRIBE_VPCS_RESPONSE = """
<DescribeVpcsResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
<vpcSet>
{% for vpc in vpcs %}
<item>
<vpcId>{{ vpc.id }}</vpcId>
<state>{{ vpc.state }}</state>
<cidrBlock>{{ vpc.cidr_block }}</cidrBlock>
{% if doc_date == "2016-11-15" %}
<cidrBlockAssociationSet>
{% for assoc in vpc.get_cidr_block_association_set() %}
<item>
<cidrBlock>{{assoc.cidr_block}}</cidrBlock>
<associationId>{{ assoc.association_id }}</associationId>
<cidrBlockState>
<state>{{assoc.cidr_block_state.state}}</state>
</cidrBlockState>
</item>
{% endfor %}
</cidrBlockAssociationSet>
<ipv6CidrBlockAssociationSet>
{% for assoc in vpc.get_cidr_block_association_set(ipv6=True) %}
<item>
<ipv6CidrBlock>{{assoc.cidr_block}}</ipv6CidrBlock>
<associationId>{{ assoc.association_id }}</associationId>
<ipv6CidrBlockState>
<state>{{assoc.cidr_block_state.state}}</state>
</ipv6CidrBlockState>
</item>
{% endfor %}
</ipv6CidrBlockAssociationSet>
{% endif %}
<dhcpOptionsId>{% if vpc.dhcp_options %}{{ vpc.dhcp_options.id }}{% else %}dopt-7a8b9c2d{% endif %}</dhcpOptionsId>
<instanceTenancy>{{ vpc.instance_tenancy }}</instanceTenancy>
<isDefault>{{ vpc.is_default }}</isDefault>
@@ -96,14 +173,14 @@ DESCRIBE_VPCS_RESPONSE = """
</DescribeVpcsResponse>"""
DELETE_VPC_RESPONSE = """
<DeleteVpcResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
<return>true</return>
</DeleteVpcResponse>
"""
DESCRIBE_VPC_ATTRIBUTE_RESPONSE = """
<DescribeVpcAttributeResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
<vpcId>{{ vpc_id }}</vpcId>
<{{ attribute }}>
@@ -112,7 +189,59 @@ DESCRIBE_VPC_ATTRIBUTE_RESPONSE = """
</DescribeVpcAttributeResponse>"""
MODIFY_VPC_ATTRIBUTE_RESPONSE = """
<ModifyVpcAttributeResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
<return>true</return>
</ModifyVpcAttributeResponse>"""
ASSOCIATE_VPC_CIDR_BLOCK_RESPONSE = """
<AssociateVpcCidrBlockResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
<vpcId>{{vpc_id}}</vpcId>
<cidrBlockAssociation>
<associationId>{{association_id}}</associationId>
<cidrBlock>{{cidr_block}}</cidrBlock>
<cidrBlockState>
<state>{{cidr_block_state}}</state>
</cidrBlockState>
</cidrBlockAssociation>
</AssociateVpcCidrBlockResponse>"""
DISASSOCIATE_VPC_CIDR_BLOCK_RESPONSE = """
<DisassociateVpcCidrBlockResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</requestId>
<vpcId>{{vpc_id}}</vpcId>
<cidrBlockAssociation>
<associationId>{{association_id}}</associationId>
<cidrBlock>{{cidr_block}}</cidrBlock>
<cidrBlockState>
<state>{{cidr_block_state}}</state>
</cidrBlockState>
</cidrBlockAssociation>
</DisassociateVpcCidrBlockResponse>"""
IPV6_ASSOCIATE_VPC_CIDR_BLOCK_RESPONSE = """
<AssociateVpcCidrBlockResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>33af6c54-1139-4d50-b4f7-15a8example</requestId>
<vpcId>{{vpc_id}}</vpcId>
<ipv6CidrBlockAssociation>
<associationId>{{association_id}}</associationId>
<ipv6CidrBlock>{{cidr_block}}</ipv6CidrBlock>
<ipv6CidrBlockState>
<state>{{cidr_block_state}}</state>
</ipv6CidrBlockState>
</ipv6CidrBlockAssociation>
</AssociateVpcCidrBlockResponse>"""
IPV6_DISASSOCIATE_VPC_CIDR_BLOCK_RESPONSE = """
<DisassociateVpcCidrBlockResponse xmlns="http://ec2.amazonaws.com/doc/2016-11-15/">
<requestId>33af6c54-1139-4d50-b4f7-15a8example</requestId>
<vpcId>{{vpc_id}}</vpcId>
<ipv6CidrBlockAssociation>
<associationId>{{association_id}}</associationId>
<ipv6CidrBlock>{{cidr_block}}</ipv6CidrBlock>
<ipv6CidrBlockState>
<state>{{cidr_block_state}}</state>
</ipv6CidrBlockState>
</ipv6CidrBlockAssociation>
</DisassociateVpcCidrBlockResponse>"""

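Taken together with the backend changes earlier in this commit, the templates above complete the AssociateVpcCidrBlock and DisassociateVpcCidrBlock round trip. A minimal boto3 sketch (the CIDR ranges are arbitrary):

import boto3
from moto import mock_ec2

@mock_ec2
def vpc_cidr_association_example():
    ec2 = boto3.client('ec2', region_name='us-east-1')
    vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')['Vpc']
    assoc = ec2.associate_vpc_cidr_block(VpcId=vpc['VpcId'],
                                         CidrBlock='10.1.0.0/16')
    assoc_id = assoc['CidrBlockAssociation']['AssociationId']
    ec2.disassociate_vpc_cidr_block(AssociationId=assoc_id)
    # Disassociating the VPC's primary CIDR block raises OperationNotPermitted instead.

vpc_cidr_association_example()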
View File

@@ -27,6 +27,7 @@ EC2_RESOURCE_TO_PREFIX = {
'reservation': 'r',
'volume': 'vol',
'vpc': 'vpc',
'vpc-cidr-association-id': 'vpc-cidr-assoc',
'vpc-elastic-ip': 'eipalloc',
'vpc-elastic-ip-association': 'eipassoc',
'vpc-peering-connection': 'pcx',
@@ -34,16 +35,17 @@ EC2_RESOURCE_TO_PREFIX = {
'vpn-gateway': 'vgw'}
EC2_PREFIX_TO_RESOURCE = dict((v, k) for (k, v) in EC2_RESOURCE_TO_PREFIX.items())
def random_resource_id(size=8):
chars = list(range(10)) + ['a', 'b', 'c', 'd', 'e', 'f']
resource_id = ''.join(six.text_type(random.choice(chars)) for x in range(size))
return resource_id
def random_id(prefix='', size=8):
return '{0}-{1}'.format(prefix, random_resource_id(size))
def random_ami_id():
@@ -110,6 +112,10 @@ def random_vpc_id():
return random_id(prefix=EC2_RESOURCE_TO_PREFIX['vpc'])
def random_vpc_cidr_association_id():
return random_id(prefix=EC2_RESOURCE_TO_PREFIX['vpc-cidr-association-id'])
def random_vpc_peering_connection_id():
return random_id(prefix=EC2_RESOURCE_TO_PREFIX['vpc-peering-connection'])
@@ -165,6 +171,10 @@ def random_ip():
)
def random_ipv6_cidr():
return "2400:6500:{}:{}::/56".format(random_resource_id(4), random_resource_id(4))
def generate_route_id(route_table_id, cidr_block):
return "%s~%s" % (route_table_id, cidr_block)

View File

@@ -1,14 +1,14 @@
from __future__ import unicode_literals
# from datetime import datetime
import hashlib
from copy import copy
from random import random
from moto.core import BaseBackend, BaseModel
from moto.ec2 import ec2_backends
from moto.ecr.exceptions import ImageNotFoundException, RepositoryNotFoundException
from botocore.exceptions import ParamValidationError
DEFAULT_REGISTRY_ID = '012345678910'
@@ -145,6 +145,17 @@ class Image(BaseObject):
response_object['imagePushedAt'] = '2017-05-09'
return response_object
@property
def response_batch_get_image(self):
response_object = {}
response_object['imageId'] = {}
response_object['imageId']['imageTag'] = self.image_tag
response_object['imageId']['imageDigest'] = self.get_image_digest()
response_object['imageManifest'] = self.image_manifest
response_object['repositoryName'] = self.repository
response_object['registryId'] = self.registry_id
return response_object
class ECRBackend(BaseBackend):
@@ -245,6 +256,39 @@ class ECRBackend(BaseBackend):
repository.images.append(image)
return image
def batch_get_image(self, repository_name, registry_id=None, image_ids=None, accepted_media_types=None):
if repository_name in self.repositories:
repository = self.repositories[repository_name]
else:
raise RepositoryNotFoundException(repository_name, registry_id or DEFAULT_REGISTRY_ID)
if not image_ids:
raise ParamValidationError(msg='Missing required parameter in input: "imageIds"')
response = {
'images': [],
'failures': [],
}
for image_id in image_ids:
found = False
for image in repository.images:
if (('imageDigest' in image_id and image.get_image_digest() == image_id['imageDigest']) or
('imageTag' in image_id and image.image_tag == image_id['imageTag'])):
found = True
response['images'].append(image.response_batch_get_image)
if not found:
response['failures'].append({
'imageId': {
'imageTag': image_id.get('imageTag', 'null')
},
'failureCode': 'ImageNotFound',
'failureReason': 'Requested image not found'
})
return response
ecr_backends = {}
for region, ec2_backend in ec2_backends.items():

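A minimal sketch of the new BatchGetImage endpoint (the repository name, manifest, and tags are invented):

import boto3
from moto import mock_ecr

@mock_ecr
def batch_get_image_example():
    ecr = boto3.client('ecr', region_name='us-east-1')
    ecr.create_repository(repositoryName='example-repo')
    ecr.put_image(repositoryName='example-repo',
                  imageManifest='{"schemaVersion": 2}',
                  imageTag='v1')
    resp = ecr.batch_get_image(repositoryName='example-repo',
                               imageIds=[{'imageTag': 'v1'},
                                         {'imageTag': 'missing'}])
    print(len(resp['images']), len(resp['failures']))  # 1 match, 1 ImageNotFound failure

batch_get_image_example()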
View File

@@ -89,9 +89,13 @@ class ECRResponse(BaseResponse):
'ECR.batch_delete_image is not yet implemented')
def batch_get_image(self):
repository_str = self._get_param('repositoryName')
registry_id = self._get_param('registryId')
image_ids = self._get_param('imageIds')
accepted_media_types = self._get_param('acceptedMediaTypes')
response = self.ecr_backend.batch_get_image(repository_str, registry_id, image_ids, accepted_media_types)
return json.dumps(response)
def can_paginate(self):
if self.is_not_dryrun('CanPaginate'):

View File

@@ -24,7 +24,7 @@ class BaseObject(BaseModel):
def gen_response_object(self):
response_object = copy(self.__dict__)
for key, value in self.__dict__.items():
if '_' in key:
response_object[self.camelCase(key)] = value
del response_object[key]
@@ -61,7 +61,11 @@ class Cluster(BaseObject):
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
# if properties are not provided, CloudFormation will use the default values for all of them
if 'Properties' in cloudformation_json:
properties = cloudformation_json['Properties']
else:
properties = {}
ecs_backend = ecs_backends[region_name]
return ecs_backend.create_cluster(
@@ -109,6 +113,10 @@ class TaskDefinition(BaseObject):
del response_object['arn']
return response_object
@property
def physical_resource_id(self):
return self.arn
@classmethod
def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
properties = cloudformation_json['Properties']
@@ -502,10 +510,27 @@ class EC2ContainerServiceBackend(BaseBackend):
def _calculate_task_resource_requirements(task_definition):
resource_requirements = {"CPU": 0, "MEMORY": 0, "PORTS": [], "PORTS_UDP": []}
for container_definition in task_definition.container_definitions:
# cloudformation uses capitalized properties, while boto uses all lower case
# CPU is optional
resource_requirements["CPU"] += container_definition.get('cpu',
container_definition.get('Cpu', 0))
# either memory or memory reservation must be provided
if 'Memory' in container_definition or 'MemoryReservation' in container_definition:
resource_requirements["MEMORY"] += container_definition.get(
"Memory", container_definition.get('MemoryReservation'))
else:
resource_requirements["MEMORY"] += container_definition.get(
"memory", container_definition.get('memoryReservation'))
port_mapping_key = 'PortMappings' if 'PortMappings' in container_definition else 'portMappings'
for port_mapping in container_definition.get(port_mapping_key, []):
if 'hostPort' in port_mapping:
resource_requirements["PORTS"].append(port_mapping.get('hostPort'))
elif 'HostPort' in port_mapping:
resource_requirements["PORTS"].append(port_mapping.get('HostPort'))
return resource_requirements
@staticmethod

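The dual lookups above exist because CloudFormation hands the backend capitalized keys ('Memory', 'PortMappings') while boto-style registration uses lower camel case. A self-contained distillation of the memory branch (the container definitions are hypothetical):

def memory_of(container_definition):
    # Either Memory or MemoryReservation must be provided, in one of two casings
    if 'Memory' in container_definition or 'MemoryReservation' in container_definition:
        return container_definition.get('Memory', container_definition.get('MemoryReservation'))
    return container_definition.get('memory', container_definition.get('memoryReservation'))

cf_style = {'Cpu': 256, 'Memory': 512, 'PortMappings': [{'HostPort': 80}]}
boto_style = {'cpu': 256, 'memory': 512, 'portMappings': [{'hostPort': 80}]}
assert memory_of(cf_style) == memory_of(boto_style) == 512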
View File

@@ -268,7 +268,7 @@ class ELBBackend(BaseBackend):
protocol = port['protocol']
instance_port = port['instance_port']
lb_port = port['load_balancer_port']
ssl_certificate_id = port.get('ssl_certificate_id')
for listener in balancer.listeners:
if lb_port == listener.load_balancer_port:
if protocol != listener.protocol:

View File

@@ -61,7 +61,7 @@ class ELBResponse(BaseResponse):
start = all_names.index(marker) + 1
else:
start = 0
page_size = self._get_int_param('PageSize', 50)  # the default is 400, but using 50 to make testing easier
load_balancers_resp = all_load_balancers[start:start + page_size]
next_marker = None
if len(all_load_balancers) > start + page_size:

View File

@@ -486,6 +486,10 @@ class ELBv2Backend(BaseBackend):
arn = load_balancer_arn.replace(':loadbalancer/', ':listener/') + "/%s%s" % (port, id(self))
listener = FakeListener(load_balancer_arn, arn, protocol, port, ssl_policy, certificate, default_actions)
balancer.listeners[listener.arn] = listener
for action in default_actions:
if action['target_group_arn'] in self.target_groups.keys():
target_group = self.target_groups[action['target_group_arn']]
target_group.load_balancer_arns.append(load_balancer_arn)
return listener
def describe_load_balancers(self, arns, names):

View File

@@ -242,7 +242,7 @@ class ELBV2Response(BaseResponse):
start = all_names.index(marker) + 1
else:
start = 0
page_size = self._get_int_param('PageSize', 50)  # the default is 400, but using 50 to make testing easier
load_balancers_resp = all_load_balancers[start:start + page_size]
next_marker = None
if len(all_load_balancers) > start + page_size:
@@ -468,7 +468,7 @@
def describe_account_limits(self):
# Supports paging but not worth implementing yet
# marker = self._get_param('Marker')
# page_size = self._get_int_param('PageSize')
limits = {
'application-load-balancers': 20,
@@ -489,7 +489,7 @@
names = self._get_multi_param('Names.member.')
# Supports paging but not worth implementing yet
# marker = self._get_param('Marker')
# page_size = self._get_int_param('PageSize')
policies = SSL_POLICIES
if names:

View File

@@ -462,10 +462,10 @@ DESCRIBE_JOB_FLOWS_TEMPLATE = """<DescribeJobFlowsResponse xmlns="http://elastic
          <ScriptBootstrapAction>
            <Args>
              {% for arg in bootstrap_action.args %}
-             <member>{{ arg }}</member>
+             <member>{{ arg | escape }}</member>
              {% endfor %}
            </Args>
-           <Path>{{ bootstrap_action.script_path }}</Path>
+           <Path>{{ bootstrap_action.script_path | escape }}</Path>
          </ScriptBootstrapAction>
        </BootstrapActionConfig>
      </member>
@@ -568,12 +568,12 @@ DESCRIBE_JOB_FLOWS_TEMPLATE = """<DescribeJobFlowsResponse xmlns="http://elastic
            <MainClass>{{ step.main_class }}</MainClass>
            <Args>
              {% for arg in step.args %}
-             <member>{{ arg }}</member>
+             <member>{{ arg | escape }}</member>
              {% endfor %}
            </Args>
            <Properties/>
          </HadoopJarStep>
-         <Name>{{ step.name }}</Name>
+         <Name>{{ step.name | escape }}</Name>
        </StepConfig>
      </member>
      {% endfor %}
@@ -596,7 +596,7 @@ DESCRIBE_STEP_TEMPLATE = """<DescribeStepResponse xmlns="http://elasticmapreduce
      <Config>
        <Args>
          {% for arg in step.args %}
-         <member>{{ arg }}</member>
+         <member>{{ arg | escape }}</member>
          {% endfor %}
        </Args>
        <Jar>{{ step.jar }}</Jar>
@@ -605,13 +605,13 @@ DESCRIBE_STEP_TEMPLATE = """<DescribeStepResponse xmlns="http://elasticmapreduce
        {% for key, val in step.properties.items() %}
          <member>
            <key>{{ key }}</key>
-           <value>{{ val }}</value>
+           <value>{{ val | escape }}</value>
          </member>
        {% endfor %}
      </Properties>
    </Config>
    <Id>{{ step.id }}</Id>
-   <Name>{{ step.name }}</Name>
+   <Name>{{ step.name | escape }}</Name>
    <Status>
      <!-- does not exist for botocore 1.4.28
      <FailureDetails>
@@ -646,7 +646,7 @@ LIST_BOOTSTRAP_ACTIONS_TEMPLATE = """<ListBootstrapActionsResponse xmlns="http:/
      <member>
        <Args>
          {% for arg in bootstrap_action.args %}
-         <member>{{ arg }}</member>
+         <member>{{ arg | escape }}</member>
          {% endfor %}
        </Args>
        <Name>{{ bootstrap_action.name }}</Name>
@@ -760,22 +760,22 @@ LIST_STEPS_TEMPLATE = """<ListStepsResponse xmlns="http://elasticmapreduce.amazo
      <Config>
        <Args>
          {% for arg in step.args %}
-         <member>{{ arg }}</member>
+         <member>{{ arg | escape }}</member>
          {% endfor %}
        </Args>
-       <Jar>{{ step.jar }}</Jar>
+       <Jar>{{ step.jar | escape }}</Jar>
        <MainClass/>
        <Properties>
          {% for key, val in step.properties.items() %}
          <member>
            <key>{{ key }}</key>
-           <value>{{ val }}</value>
+           <value>{{ val | escape }}</value>
          </member>
          {% endfor %}
        </Properties>
      </Config>
      <Id>{{ step.id }}</Id>
-     <Name>{{ step.name }}</Name>
+     <Name>{{ step.name | escape }}</Name>
      <Status>
        <!-- does not exist for botocore 1.4.28
        <FailureDetails>
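All of these template changes make the same fix: user-supplied values are now piped through Jinja2's `escape` filter before being embedded in the XML responses. A minimal standalone sketch (not part of this diff) of what goes wrong without it:

```python
from jinja2 import Template

arg = 'echo "<done>" && exit'
raw = Template('<member>{{ arg }}</member>').render(arg=arg)
escaped = Template('<member>{{ arg | escape }}</member>').render(arg=arg)

print(raw)      # <member>echo "<done>" && exit</member>  -- not well-formed XML
print(escaped)  # <member>echo &#34;&lt;done&gt;&#34; &amp;&amp; exit</member>
```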

View File

@@ -2,42 +2,101 @@ from __future__ import unicode_literals
 import hashlib
+import datetime

 import boto.glacier
 from moto.core import BaseBackend, BaseModel

 from .utils import get_job_id


-class ArchiveJob(BaseModel):
+class Job(BaseModel):
+    def __init__(self, tier):
+        self.st = datetime.datetime.now()

-    def __init__(self, job_id, archive_id):
+        if tier.lower() == "expedited":
+            self.et = self.st + datetime.timedelta(seconds=2)
+        elif tier.lower() == "bulk":
+            self.et = self.st + datetime.timedelta(seconds=10)
+        else:
+            # Standard
+            self.et = self.st + datetime.timedelta(seconds=5)
+
+
+class ArchiveJob(Job):
+
+    def __init__(self, job_id, tier, arn, archive_id):
         self.job_id = job_id
+        self.tier = tier
+        self.arn = arn
         self.archive_id = archive_id
+        Job.__init__(self, tier)

     def to_dict(self):
-        return {
-            "Action": "InventoryRetrieval",
+        d = {
+            "Action": "ArchiveRetrieval",
             "ArchiveId": self.archive_id,
             "ArchiveSizeInBytes": 0,
             "ArchiveSHA256TreeHash": None,
-            "Completed": True,
-            "CompletionDate": "2013-03-20T17:03:43.221Z",
-            "CreationDate": "2013-03-20T17:03:43.221Z",
-            "InventorySizeInBytes": "0",
+            "Completed": False,
+            "CreationDate": self.st.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
+            "InventorySizeInBytes": 0,
             "JobDescription": None,
             "JobId": self.job_id,
             "RetrievalByteRange": None,
             "SHA256TreeHash": None,
             "SNSTopic": None,
-            "StatusCode": "Succeeded",
+            "StatusCode": "InProgress",
             "StatusMessage": None,
-            "VaultARN": None,
+            "VaultARN": self.arn,
+            "Tier": self.tier
         }
+        if datetime.datetime.now() > self.et:
+            d["Completed"] = True
+            d["CompletionDate"] = self.et.strftime("%Y-%m-%dT%H:%M:%S.000Z")
+            d["InventorySizeInBytes"] = 10000
+            d["StatusCode"] = "Succeeded"
+        return d
+
+
+class InventoryJob(Job):
+
+    def __init__(self, job_id, tier, arn):
+        self.job_id = job_id
+        self.tier = tier
+        self.arn = arn
+        Job.__init__(self, tier)
+
+    def to_dict(self):
+        d = {
+            "Action": "InventoryRetrieval",
+            "ArchiveSHA256TreeHash": None,
+            "Completed": False,
+            "CreationDate": self.st.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
+            "InventorySizeInBytes": 0,
+            "JobDescription": None,
+            "JobId": self.job_id,
+            "RetrievalByteRange": None,
+            "SHA256TreeHash": None,
+            "SNSTopic": None,
+            "StatusCode": "InProgress",
+            "StatusMessage": None,
+            "VaultARN": self.arn,
+            "Tier": self.tier
+        }
+        if datetime.datetime.now() > self.et:
+            d["Completed"] = True
+            d["CompletionDate"] = self.et.strftime("%Y-%m-%dT%H:%M:%S.000Z")
+            d["InventorySizeInBytes"] = 10000
+            d["StatusCode"] = "Succeeded"
+        return d


 class Vault(BaseModel):

     def __init__(self, vault_name, region):
+        self.st = datetime.datetime.now()
         self.vault_name = vault_name
         self.region = region
         self.archives = {}
@@ -48,29 +107,57 @@ class Vault(BaseModel):
         return "arn:aws:glacier:{0}:012345678901:vaults/{1}".format(self.region, self.vault_name)

     def to_dict(self):
-        return {
-            "CreationDate": "2013-03-20T17:03:43.221Z",
-            "LastInventoryDate": "2013-03-20T17:03:43.221Z",
-            "NumberOfArchives": None,
-            "SizeInBytes": None,
+        archives_size = 0
+        for k in self.archives:
+            archives_size += self.archives[k]["size"]
+        d = {
+            "CreationDate": self.st.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
+            "LastInventoryDate": self.st.strftime("%Y-%m-%dT%H:%M:%S.000Z"),
+            "NumberOfArchives": len(self.archives),
+            "SizeInBytes": archives_size,
             "VaultARN": self.arn,
             "VaultName": self.vault_name,
         }
+        return d

-    def create_archive(self, body):
-        archive_id = hashlib.sha256(body).hexdigest()
-        self.archives[archive_id] = body
+    def create_archive(self, body, description):
+        archive_id = hashlib.md5(body).hexdigest()
+        self.archives[archive_id] = {}
+        self.archives[archive_id]["body"] = body
+        self.archives[archive_id]["size"] = len(body)
+        self.archives[archive_id]["sha256"] = hashlib.sha256(body).hexdigest()
+        self.archives[archive_id]["creation_date"] = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S.000Z")
+        self.archives[archive_id]["description"] = description
         return archive_id

     def get_archive_body(self, archive_id):
-        return self.archives[archive_id]
+        return self.archives[archive_id]["body"]
+
+    def get_archive_list(self):
+        archive_list = []
+        for a in self.archives:
+            archive = self.archives[a]
+            aobj = {
+                "ArchiveId": a,
+                "ArchiveDescription": archive["description"],
+                "CreationDate": archive["creation_date"],
+                "Size": archive["size"],
+                "SHA256TreeHash": archive["sha256"]
+            }
+            archive_list.append(aobj)
+        return archive_list

     def delete_archive(self, archive_id):
         return self.archives.pop(archive_id)

-    def initiate_job(self, archive_id):
+    def initiate_job(self, job_type, tier, archive_id):
         job_id = get_job_id()
-        job = ArchiveJob(job_id, archive_id)
+
+        if job_type == "inventory-retrieval":
+            job = InventoryJob(job_id, tier, self.arn)
+        elif job_type == "archive-retrieval":
+            job = ArchiveJob(job_id, tier, self.arn, archive_id)
         self.jobs[job_id] = job
         return job_id
@@ -80,10 +167,24 @@ class Vault(BaseModel):
     def describe_job(self, job_id):
         return self.jobs.get(job_id)

+    def job_ready(self, job_id):
+        job = self.describe_job(job_id)
+        jobj = job.to_dict()
+        return jobj["Completed"]
+
     def get_job_output(self, job_id):
         job = self.describe_job(job_id)
-        archive_body = self.get_archive_body(job.archive_id)
-        return archive_body
+        jobj = job.to_dict()
+        if jobj["Action"] == "InventoryRetrieval":
+            archives = self.get_archive_list()
+            return {
+                "VaultARN": self.arn,
+                "InventoryDate": jobj["CompletionDate"],
+                "ArchiveList": archives
+            }
+        else:
+            archive_body = self.get_archive_body(job.archive_id)
+            return archive_body


 class GlacierBackend(BaseBackend):
@@ -109,9 +210,9 @@ class GlacierBackend(BaseBackend):
     def delete_vault(self, vault_name):
         self.vaults.pop(vault_name)

-    def initiate_job(self, vault_name, archive_id):
+    def initiate_job(self, vault_name, job_type, tier, archive_id):
         vault = self.get_vault(vault_name)
-        job_id = vault.initiate_job(archive_id)
+        job_id = vault.initiate_job(job_type, tier, archive_id)
         return job_id

     def list_jobs(self, vault_name):
View File

@@ -72,17 +72,25 @@ class GlacierResponse(_TemplateEnvironmentMixin):
     def _vault_archive_response(self, request, full_url, headers):
         method = request.method
-        body = request.body
+        if hasattr(request, 'body'):
+            body = request.body
+        else:
+            body = request.data
+        description = ""
+        if 'x-amz-archive-description' in request.headers:
+            description = request.headers['x-amz-archive-description']
         parsed_url = urlparse(full_url)
         querystring = parse_qs(parsed_url.query, keep_blank_values=True)
         vault_name = full_url.split("/")[-2]

         if method == 'POST':
-            return self._vault_archive_response_post(vault_name, body, querystring, headers)
+            return self._vault_archive_response_post(vault_name, body, description, querystring, headers)
+        else:
+            return 400, headers, "400 Bad Request"

-    def _vault_archive_response_post(self, vault_name, body, querystring, headers):
+    def _vault_archive_response_post(self, vault_name, body, description, querystring, headers):
         vault = self.backend.get_vault(vault_name)
-        vault_id = vault.create_archive(body)
+        vault_id = vault.create_archive(body, description)
         headers['x-amz-archive-id'] = vault_id
         return 201, headers, ""
@@ -110,7 +118,10 @@ class GlacierResponse(_TemplateEnvironmentMixin):
     def _vault_jobs_response(self, request, full_url, headers):
         method = request.method
-        body = request.body
+        if hasattr(request, 'body'):
+            body = request.body
+        else:
+            body = request.data
         account_id = full_url.split("/")[1]
         vault_name = full_url.split("/")[-2]
@@ -125,11 +136,17 @@ class GlacierResponse(_TemplateEnvironmentMixin):
             })
         elif method == 'POST':
             json_body = json.loads(body.decode("utf-8"))
-            archive_id = json_body['ArchiveId']
-            job_id = self.backend.initiate_job(vault_name, archive_id)
+            job_type = json_body['Type']
+            archive_id = None
+            if 'ArchiveId' in json_body:
+                archive_id = json_body['ArchiveId']
+            if 'Tier' in json_body:
+                tier = json_body["Tier"]
+            else:
+                tier = "Standard"
+            job_id = self.backend.initiate_job(vault_name, job_type, tier, archive_id)
             headers['x-amz-job-id'] = job_id
-            headers[
-                'Location'] = "/{0}/vaults/{1}/jobs/{2}".format(account_id, vault_name, job_id)
+            headers['Location'] = "/{0}/vaults/{1}/jobs/{2}".format(account_id, vault_name, job_id)
             return 202, headers, ""

     @classmethod
@@ -155,8 +172,14 @@ class GlacierResponse(_TemplateEnvironmentMixin):
     def _vault_jobs_output_response(self, request, full_url, headers):
         vault_name = full_url.split("/")[-4]
         job_id = full_url.split("/")[-2]
         vault = self.backend.get_vault(vault_name)
-        output = vault.get_job_output(job_id)
-        headers['content-type'] = 'application/octet-stream'
-        return 200, headers, output
+        if vault.job_ready(job_id):
+            output = vault.get_job_output(job_id)
+            if isinstance(output, dict):
+                headers['content-type'] = 'application/json'
+                return 200, headers, json.dumps(output)
+            else:
+                headers['content-type'] = 'application/octet-stream'
+                return 200, headers, output
+        else:
+            return 404, headers, "404 Not Found"
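Taken together, the Glacier model and handler changes mean a retrieval job is no longer instantly complete: GetJobOutput returns 404 until the simulated tier delay has elapsed. A hedged boto3 sketch of the new flow (not part of this diff; vault and payload names are illustrative):

```python
import time

import boto3
from moto import mock_glacier


@mock_glacier
def test_archive_retrieval_job():
    client = boto3.client('glacier', region_name='us-west-2')
    client.create_vault(vaultName='my-vault')
    archive = client.upload_archive(vaultName='my-vault', body=b'example data')
    job = client.initiate_job(vaultName='my-vault', jobParameters={
        'Type': 'archive-retrieval',
        'ArchiveId': archive['archiveId'],
        'Tier': 'Expedited',
    })
    # The Expedited tier is simulated with a ~2 second delay, after which
    # the job flips to Succeeded and the output becomes fetchable.
    time.sleep(3)
    output = client.get_job_output(vaultName='my-vault', jobId=job['jobId'])
    assert output['body'].read() == b'example data'
```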

View File

@@ -122,7 +122,7 @@ class Role(BaseModel):
         role = iam_backend.create_role(
             role_name=resource_name,
             assume_role_policy_document=properties['AssumeRolePolicyDocument'],
-            path=properties['Path'],
+            path=properties.get('Path', '/'),
         )

         policies = properties.get('Policies', [])
@@ -173,7 +173,7 @@ class InstanceProfile(BaseModel):
         role_ids = properties['Roles']
         return iam_backend.create_instance_profile(
             name=resource_name,
-            path=properties['Path'],
+            path=properties.get('Path', '/'),
             role_ids=role_ids,
         )
@@ -349,6 +349,14 @@ class User(BaseModel):
             raise IAMNotFoundException(
                 "Key {0} not found".format(access_key_id))

+    def update_access_key(self, access_key_id, status):
+        for key in self.access_keys:
+            if key.access_key_id == access_key_id:
+                key.status = status
+                break
+        else:
+            raise IAMNotFoundException("The Access Key with id {0} cannot be found".format(access_key_id))
+
     def get_cfn_attribute(self, attribute_name):
         from moto.cloudformation.exceptions import UnformattedGetAttTemplateException
         if attribute_name == 'Arn':
@@ -817,6 +825,10 @@ class IAMBackend(BaseBackend):
         key = user.create_access_key()
         return key

+    def update_access_key(self, user_name, access_key_id, status):
+        user = self.get_user(user_name)
+        user.update_access_key(access_key_id, status)
+
     def get_all_access_keys(self, user_name, marker=None, max_items=None):
         user = self.get_user(user_name)
         keys = user.get_all_access_keys()

View File

@@ -440,6 +440,14 @@ class IamResponse(BaseResponse):
         template = self.response_template(CREATE_ACCESS_KEY_TEMPLATE)
         return template.render(key=key)

+    def update_access_key(self):
+        user_name = self._get_param('UserName')
+        access_key_id = self._get_param('AccessKeyId')
+        status = self._get_param('Status')
+        iam_backend.update_access_key(user_name, access_key_id, status)
+        template = self.response_template(GENERIC_EMPTY_TEMPLATE)
+        return template.render(name='UpdateAccessKey')
+
     def list_access_keys(self):
         user_name = self._get_param('UserName')
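A hedged sketch (not part of this diff) of the new UpdateAccessKey round trip through boto3; the user name is illustrative:

```python
import boto3
from moto import mock_iam


@mock_iam
def test_update_access_key():
    client = boto3.client('iam', region_name='us-east-1')
    client.create_user(UserName='my-user')
    key = client.create_access_key(UserName='my-user')['AccessKey']
    client.update_access_key(UserName='my-user',
                             AccessKeyId=key['AccessKeyId'],
                             Status='Inactive')
    keys = client.list_access_keys(UserName='my-user')['AccessKeyMetadata']
    assert keys[0]['Status'] == 'Inactive'
```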

View File

@@ -16,9 +16,18 @@ class ResourceNotFoundException(IoTClientError):
 class InvalidRequestException(IoTClientError):
-    def __init__(self):
+    def __init__(self, msg=None):
         self.code = 400
         super(InvalidRequestException, self).__init__(
             "InvalidRequestException",
-            "The request is not valid."
+            msg or "The request is not valid."
         )
+
+
+class VersionConflictException(IoTClientError):
+    def __init__(self, name):
+        self.code = 409
+        super(VersionConflictException, self).__init__(
+            'VersionConflictException',
+            'The version for thing %s does not match the expected version.' % name
+        )

View File

@@ -9,7 +9,8 @@ from moto.core import BaseBackend, BaseModel
 from collections import OrderedDict
 from .exceptions import (
     ResourceNotFoundException,
-    InvalidRequestException
+    InvalidRequestException,
+    VersionConflictException
 )
@@ -44,6 +45,7 @@ class FakeThingType(BaseModel):
         self.region_name = region_name
         self.thing_type_name = thing_type_name
         self.thing_type_properties = thing_type_properties
+        self.thing_type_id = str(uuid.uuid4())  # I don't know the rule of id
         t = time.time()
         self.metadata = {
             'deprecated': False,
@@ -54,11 +56,37 @@ class FakeThingType(BaseModel):
     def to_dict(self):
         return {
             'thingTypeName': self.thing_type_name,
+            'thingTypeId': self.thing_type_id,
             'thingTypeProperties': self.thing_type_properties,
             'thingTypeMetadata': self.metadata
         }


+class FakeThingGroup(BaseModel):
+
+    def __init__(self, thing_group_name, parent_group_name, thing_group_properties, region_name):
+        self.region_name = region_name
+        self.thing_group_name = thing_group_name
+        self.thing_group_id = str(uuid.uuid4())  # I don't know the rule of id
+        self.version = 1  # TODO: tmp
+        self.parent_group_name = parent_group_name
+        self.thing_group_properties = thing_group_properties or {}
+        t = time.time()
+        self.metadata = {
+            'creationData': int(t * 1000) / 1000.0
+        }
+        self.arn = 'arn:aws:iot:%s:1:thinggroup/%s' % (self.region_name, thing_group_name)
+        self.things = OrderedDict()
+
+    def to_dict(self):
+        return {
+            'thingGroupName': self.thing_group_name,
+            'thingGroupId': self.thing_group_id,
+            'version': self.version,
+            'thingGroupProperties': self.thing_group_properties,
+            'thingGroupMetadata': self.metadata
+        }
+
+
 class FakeCertificate(BaseModel):
     def __init__(self, certificate_pem, status, region_name):
         m = hashlib.sha256()
@@ -137,6 +165,7 @@ class IoTBackend(BaseBackend):
         self.region_name = region_name
         self.things = OrderedDict()
         self.thing_types = OrderedDict()
+        self.thing_groups = OrderedDict()
         self.certificates = OrderedDict()
         self.policies = OrderedDict()
         self.principal_policies = OrderedDict()
@@ -359,6 +388,125 @@ class IoTBackend(BaseBackend):
         principals = [k[0] for k, v in self.principal_things.items() if k[1] == thing_name]
         return principals

+    def describe_thing_group(self, thing_group_name):
+        thing_groups = [_ for _ in self.thing_groups.values() if _.thing_group_name == thing_group_name]
+        if len(thing_groups) == 0:
+            raise ResourceNotFoundException()
+        return thing_groups[0]
+
+    def create_thing_group(self, thing_group_name, parent_group_name, thing_group_properties):
+        thing_group = FakeThingGroup(thing_group_name, parent_group_name, thing_group_properties, self.region_name)
+        self.thing_groups[thing_group.arn] = thing_group
+        return thing_group.thing_group_name, thing_group.arn, thing_group.thing_group_id
+
+    def delete_thing_group(self, thing_group_name, expected_version):
+        thing_group = self.describe_thing_group(thing_group_name)
+        del self.thing_groups[thing_group.arn]
+
+    def list_thing_groups(self, parent_group, name_prefix_filter, recursive):
+        thing_groups = self.thing_groups.values()
+        return thing_groups
+
+    def update_thing_group(self, thing_group_name, thing_group_properties, expected_version):
+        thing_group = self.describe_thing_group(thing_group_name)
+        if expected_version and expected_version != thing_group.version:
+            raise VersionConflictException(thing_group_name)
+        attribute_payload = thing_group_properties.get('attributePayload', None)
+        if attribute_payload is not None and 'attributes' in attribute_payload:
+            do_merge = attribute_payload.get('merge', False)
+            attributes = attribute_payload['attributes']
+            if not do_merge:
+                thing_group.thing_group_properties['attributePayload']['attributes'] = attributes
+            else:
+                thing_group.thing_group_properties['attributePayload']['attributes'].update(attributes)
+        elif attribute_payload is not None and 'attributes' not in attribute_payload:
+            thing_group.attributes = {}
+        thing_group.version = thing_group.version + 1
+        return thing_group.version
+
+    def _identify_thing_group(self, thing_group_name, thing_group_arn):
+        # identify thing group
+        if thing_group_name is None and thing_group_arn is None:
+            raise InvalidRequestException(
+                ' Both thingGroupArn and thingGroupName are empty. Need to specify at least one of them'
+            )
+        if thing_group_name is not None:
+            thing_group = self.describe_thing_group(thing_group_name)
+            if thing_group_arn and thing_group.arn != thing_group_arn:
+                raise InvalidRequestException(
+                    'ThingGroupName thingGroupArn does not match specified thingGroupName in request'
+                )
+        elif thing_group_arn is not None:
+            if thing_group_arn not in self.thing_groups:
+                raise InvalidRequestException()
+            thing_group = self.thing_groups[thing_group_arn]
+        return thing_group
+
+    def _identify_thing(self, thing_name, thing_arn):
+        # identify thing
+        if thing_name is None and thing_arn is None:
+            raise InvalidRequestException(
+                'Both thingArn and thingName are empty. Need to specify at least one of them'
+            )
+        if thing_name is not None:
+            thing = self.describe_thing(thing_name)
+            if thing_arn and thing.arn != thing_arn:
+                raise InvalidRequestException(
+                    'ThingName thingArn does not match specified thingName in request'
+                )
+        elif thing_arn is not None:
+            if thing_arn not in self.things:
+                raise InvalidRequestException()
+            thing = self.things[thing_arn]
+        return thing
+
+    def add_thing_to_thing_group(self, thing_group_name, thing_group_arn, thing_name, thing_arn):
+        thing_group = self._identify_thing_group(thing_group_name, thing_group_arn)
+        thing = self._identify_thing(thing_name, thing_arn)
+        if thing.arn in thing_group.things:
+            # aws ignores duplicate registration
+            return
+        thing_group.things[thing.arn] = thing
+
+    def remove_thing_from_thing_group(self, thing_group_name, thing_group_arn, thing_name, thing_arn):
+        thing_group = self._identify_thing_group(thing_group_name, thing_group_arn)
+        thing = self._identify_thing(thing_name, thing_arn)
+        if thing.arn not in thing_group.things:
+            # aws ignores non-registered thing
+            return
+        del thing_group.things[thing.arn]
+
+    def list_things_in_thing_group(self, thing_group_name, recursive):
+        thing_group = self.describe_thing_group(thing_group_name)
+        return thing_group.things.values()
+
+    def list_thing_groups_for_thing(self, thing_name):
+        thing = self.describe_thing(thing_name)
+        all_thing_groups = self.list_thing_groups(None, None, None)
+        ret = []
+        for thing_group in all_thing_groups:
+            if thing.arn in thing_group.things:
+                ret.append({
+                    'groupName': thing_group.thing_group_name,
+                    'groupArn': thing_group.arn
+                })
+        return ret
+
+    def update_thing_groups_for_thing(self, thing_name, thing_groups_to_add, thing_groups_to_remove):
+        thing = self.describe_thing(thing_name)
+        for thing_group_name in thing_groups_to_add:
+            thing_group = self.describe_thing_group(thing_group_name)
+            self.add_thing_to_thing_group(
+                thing_group.thing_group_name, None,
+                thing.thing_name, None
+            )
+        for thing_group_name in thing_groups_to_remove:
+            thing_group = self.describe_thing_group(thing_group_name)
+            self.remove_thing_from_thing_group(
+                thing_group.thing_group_name, None,
+                thing.thing_name, None
+            )
+

 available_regions = boto3.session.Session().get_available_regions("iot")
 iot_backends = {region: IoTBackend(region) for region in available_regions}

View File

@@ -38,8 +38,7 @@ class IoTResponse(BaseResponse):
         thing_types = self.iot_backend.list_thing_types(
             thing_type_name=thing_type_name
         )
-
-        # TODO: support next_token and max_results
+        # TODO: implement pagination in the future
         next_token = None
         return json.dumps(dict(thingTypes=[_.to_dict() for _ in thing_types], nextToken=next_token))
@@ -54,7 +53,7 @@ class IoTResponse(BaseResponse):
             attribute_value=attribute_value,
             thing_type_name=thing_type_name,
         )
-        # TODO: support next_token and max_results
+        # TODO: implement pagination in the future
         next_token = None
         return json.dumps(dict(things=[_.to_dict() for _ in things], nextToken=next_token))
@@ -63,7 +62,6 @@ class IoTResponse(BaseResponse):
         thing = self.iot_backend.describe_thing(
             thing_name=thing_name,
         )
-        print(thing.to_dict(include_default_client_id=True))
         return json.dumps(thing.to_dict(include_default_client_id=True))

     def describe_thing_type(self):
@@ -105,7 +103,7 @@ class IoTResponse(BaseResponse):
         return json.dumps(dict())

     def create_keys_and_certificate(self):
-        set_as_active = self._get_param("setAsActive")
+        set_as_active = self._get_bool_param("setAsActive")
         cert, key_pair = self.iot_backend.create_keys_and_certificate(
             set_as_active=set_as_active,
         )
@@ -135,7 +133,7 @@ class IoTResponse(BaseResponse):
         # marker = self._get_param("marker")
         # ascending_order = self._get_param("ascendingOrder")
         certificates = self.iot_backend.list_certificates()
-        # TODO: handle pagination
+        # TODO: implement pagination in the future
         return json.dumps(dict(certificates=[_.to_dict() for _ in certificates]))

     def update_certificate(self):
@@ -162,7 +160,7 @@ class IoTResponse(BaseResponse):
         # ascending_order = self._get_param("ascendingOrder")
         policies = self.iot_backend.list_policies()
-        # TODO: handle pagination
+        # TODO: implement pagination in the future
         return json.dumps(dict(policies=[_.to_dict() for _ in policies]))

     def get_policy(self):
@@ -205,7 +203,7 @@ class IoTResponse(BaseResponse):
         policies = self.iot_backend.list_principal_policies(
             principal_arn=principal
         )
-        # TODO: handle pagination
+        # TODO: implement pagination in the future
         next_marker = None
         return json.dumps(dict(policies=[_.to_dict() for _ in policies], nextMarker=next_marker))
@@ -217,7 +215,7 @@ class IoTResponse(BaseResponse):
         principals = self.iot_backend.list_policy_principals(
             policy_name=policy_name,
         )
-        # TODO: handle pagination
+        # TODO: implement pagination in the future
         next_marker = None
         return json.dumps(dict(principals=principals, nextMarker=next_marker))
@@ -246,7 +244,7 @@ class IoTResponse(BaseResponse):
         things = self.iot_backend.list_principal_things(
             principal_arn=principal,
         )
-        # TODO: handle pagination
+        # TODO: implement pagination in the future
         next_token = None
         return json.dumps(dict(things=things, nextToken=next_token))
@@ -256,3 +254,123 @@ class IoTResponse(BaseResponse):
             thing_name=thing_name,
         )
         return json.dumps(dict(principals=principals))
+
+    def describe_thing_group(self):
+        thing_group_name = self._get_param("thingGroupName")
+        thing_group = self.iot_backend.describe_thing_group(
+            thing_group_name=thing_group_name,
+        )
+        return json.dumps(thing_group.to_dict())
+
+    def create_thing_group(self):
+        thing_group_name = self._get_param("thingGroupName")
+        parent_group_name = self._get_param("parentGroupName")
+        thing_group_properties = self._get_param("thingGroupProperties")
+        thing_group_name, thing_group_arn, thing_group_id = self.iot_backend.create_thing_group(
+            thing_group_name=thing_group_name,
+            parent_group_name=parent_group_name,
+            thing_group_properties=thing_group_properties,
+        )
+        return json.dumps(dict(
+            thingGroupName=thing_group_name,
+            thingGroupArn=thing_group_arn,
+            thingGroupId=thing_group_id)
+        )
+
+    def delete_thing_group(self):
+        thing_group_name = self._get_param("thingGroupName")
+        expected_version = self._get_param("expectedVersion")
+        self.iot_backend.delete_thing_group(
+            thing_group_name=thing_group_name,
+            expected_version=expected_version,
+        )
+        return json.dumps(dict())
+
+    def list_thing_groups(self):
+        # next_token = self._get_param("nextToken")
+        # max_results = self._get_int_param("maxResults")
+        parent_group = self._get_param("parentGroup")
+        name_prefix_filter = self._get_param("namePrefixFilter")
+        recursive = self._get_param("recursive")
+        thing_groups = self.iot_backend.list_thing_groups(
+            parent_group=parent_group,
+            name_prefix_filter=name_prefix_filter,
+            recursive=recursive,
+        )
+        next_token = None
+        rets = [{'groupName': _.thing_group_name, 'groupArn': _.arn} for _ in thing_groups]
+        # TODO: implement pagination in the future
+        return json.dumps(dict(thingGroups=rets, nextToken=next_token))
+
+    def update_thing_group(self):
+        thing_group_name = self._get_param("thingGroupName")
+        thing_group_properties = self._get_param("thingGroupProperties")
+        expected_version = self._get_param("expectedVersion")
+        version = self.iot_backend.update_thing_group(
+            thing_group_name=thing_group_name,
+            thing_group_properties=thing_group_properties,
+            expected_version=expected_version,
+        )
+        return json.dumps(dict(version=version))
+
+    def add_thing_to_thing_group(self):
+        thing_group_name = self._get_param("thingGroupName")
+        thing_group_arn = self._get_param("thingGroupArn")
+        thing_name = self._get_param("thingName")
+        thing_arn = self._get_param("thingArn")
+        self.iot_backend.add_thing_to_thing_group(
+            thing_group_name=thing_group_name,
+            thing_group_arn=thing_group_arn,
+            thing_name=thing_name,
+            thing_arn=thing_arn,
+        )
+        return json.dumps(dict())
+
+    def remove_thing_from_thing_group(self):
+        thing_group_name = self._get_param("thingGroupName")
+        thing_group_arn = self._get_param("thingGroupArn")
+        thing_name = self._get_param("thingName")
+        thing_arn = self._get_param("thingArn")
+        self.iot_backend.remove_thing_from_thing_group(
+            thing_group_name=thing_group_name,
+            thing_group_arn=thing_group_arn,
+            thing_name=thing_name,
+            thing_arn=thing_arn,
+        )
+        return json.dumps(dict())
+
+    def list_things_in_thing_group(self):
+        thing_group_name = self._get_param("thingGroupName")
+        recursive = self._get_param("recursive")
+        # next_token = self._get_param("nextToken")
+        # max_results = self._get_int_param("maxResults")
+        things = self.iot_backend.list_things_in_thing_group(
+            thing_group_name=thing_group_name,
+            recursive=recursive,
+        )
+        next_token = None
+        thing_names = [_.thing_name for _ in things]
+        # TODO: implement pagination in the future
+        return json.dumps(dict(things=thing_names, nextToken=next_token))
+
+    def list_thing_groups_for_thing(self):
+        thing_name = self._get_param("thingName")
+        # next_token = self._get_param("nextToken")
+        # max_results = self._get_int_param("maxResults")
+        thing_groups = self.iot_backend.list_thing_groups_for_thing(
+            thing_name=thing_name
+        )
+        next_token = None
+        # TODO: implement pagination in the future
+        return json.dumps(dict(thingGroups=thing_groups, nextToken=next_token))
+
+    def update_thing_groups_for_thing(self):
+        thing_name = self._get_param("thingName")
+        thing_groups_to_add = self._get_param("thingGroupsToAdd") or []
+        thing_groups_to_remove = self._get_param("thingGroupsToRemove") or []
+        self.iot_backend.update_thing_groups_for_thing(
+            thing_name=thing_name,
+            thing_groups_to_add=thing_groups_to_add,
+            thing_groups_to_remove=thing_groups_to_remove,
+        )
+        return json.dumps(dict())
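A hedged sketch (not part of this diff) of the new thing-group endpoints end to end; group and thing names are illustrative:

```python
import boto3
from moto import mock_iot


@mock_iot
def test_thing_groups():
    client = boto3.client('iot', region_name='ap-northeast-1')
    client.create_thing_group(thingGroupName='my-group')
    client.create_thing(thingName='my-thing')
    client.add_thing_to_thing_group(thingGroupName='my-group', thingName='my-thing')
    things = client.list_things_in_thing_group(thingGroupName='my-group')['things']
    assert things == ['my-thing']
    # The group version starts at 1 and is bumped on each update; a stale
    # expectedVersion raises a VersionConflictException (HTTP 409) instead.
    version = client.update_thing_group(
        thingGroupName='my-group',
        thingGroupProperties={'thingGroupDescription': 'updated'},
        expectedVersion=1,
    )['version']
    assert version == 2
```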

View File

@@ -17,7 +17,7 @@ class ResourceNotFoundError(BadRequest):
 class ResourceInUseError(BadRequest):

     def __init__(self, message):
-        super(ResourceNotFoundError, self).__init__()
+        super(ResourceInUseError, self).__init__()
         self.description = json.dumps({
             "message": message,
             '__type': 'ResourceInUseException',

33
moto/logs/exceptions.py Normal file
View File

@@ -0,0 +1,33 @@
from __future__ import unicode_literals

from moto.core.exceptions import JsonRESTError


class LogsClientError(JsonRESTError):
    code = 400


class ResourceNotFoundException(LogsClientError):
    def __init__(self):
        self.code = 400
        super(ResourceNotFoundException, self).__init__(
            "ResourceNotFoundException",
            "The specified resource does not exist"
        )


class InvalidParameterException(LogsClientError):
    def __init__(self, msg=None):
        self.code = 400
        super(InvalidParameterException, self).__init__(
            "InvalidParameterException",
            msg or "A parameter is specified incorrectly."
        )


class ResourceAlreadyExistsException(LogsClientError):
    def __init__(self):
        self.code = 400
        super(ResourceAlreadyExistsException, self).__init__(
            'ResourceAlreadyExistsException',
            'The specified resource already exists.'
        )

View File

@@ -1,6 +1,10 @@
 from moto.core import BaseBackend
 import boto.logs
 from moto.core.utils import unix_time_millis
+from .exceptions import (
+    ResourceNotFoundException,
+    ResourceAlreadyExistsException
+)


 class LogEvent:
@@ -49,23 +53,29 @@ class LogStream:
         self.__class__._log_ids += 1

     def _update(self):
-        self.firstEventTimestamp = min([x.timestamp for x in self.events])
-        self.lastEventTimestamp = max([x.timestamp for x in self.events])
+        # events can be empty when stream is described soon after creation
+        self.firstEventTimestamp = min([x.timestamp for x in self.events]) if self.events else None
+        self.lastEventTimestamp = max([x.timestamp for x in self.events]) if self.events else None

     def to_describe_dict(self):
         # Compute start and end times
         self._update()
-        return {
+        res = {
             "arn": self.arn,
             "creationTime": self.creationTime,
-            "firstEventTimestamp": self.firstEventTimestamp,
-            "lastEventTimestamp": self.lastEventTimestamp,
-            "lastIngestionTime": self.lastIngestionTime,
             "logStreamName": self.logStreamName,
             "storedBytes": self.storedBytes,
-            "uploadSequenceToken": str(self.uploadSequenceToken),
         }
+        if self.events:
+            rest = {
+                "firstEventTimestamp": self.firstEventTimestamp,
+                "lastEventTimestamp": self.lastEventTimestamp,
+                "lastIngestionTime": self.lastIngestionTime,
+                "uploadSequenceToken": str(self.uploadSequenceToken),
+            }
+            res.update(rest)
+        return res

     def put_log_events(self, log_group_name, log_stream_name, log_events, sequence_token):
         # TODO: ensure sequence_token
@@ -126,18 +136,22 @@ class LogGroup:
         self.streams = dict()  # {name: LogStream}

     def create_log_stream(self, log_stream_name):
-        assert log_stream_name not in self.streams
+        if log_stream_name in self.streams:
+            raise ResourceAlreadyExistsException()
         self.streams[log_stream_name] = LogStream(self.region, self.name, log_stream_name)

     def delete_log_stream(self, log_stream_name):
-        assert log_stream_name in self.streams
+        if log_stream_name not in self.streams:
+            raise ResourceNotFoundException()
         del self.streams[log_stream_name]

     def describe_log_streams(self, descending, limit, log_group_name, log_stream_name_prefix, next_token, order_by):
+        # responses only logStreamName, creationTime, arn, storedBytes when no events are stored.
         log_streams = [(name, stream.to_describe_dict()) for name, stream in self.streams.items() if name.startswith(log_stream_name_prefix)]

         def sorter(item):
-            return item[0] if order_by == 'logStreamName' else item[1]['lastEventTimestamp']
+            return item[0] if order_by == 'logStreamName' else item[1].get('lastEventTimestamp', 0)

         if next_token is None:
             next_token = 0
@@ -151,18 +165,18 @@ class LogGroup:
         return log_streams_page, new_token

     def put_log_events(self, log_group_name, log_stream_name, log_events, sequence_token):
-        assert log_stream_name in self.streams
+        if log_stream_name not in self.streams:
+            raise ResourceNotFoundException()
         stream = self.streams[log_stream_name]
         return stream.put_log_events(log_group_name, log_stream_name, log_events, sequence_token)

     def get_log_events(self, log_group_name, log_stream_name, start_time, end_time, limit, next_token, start_from_head):
-        assert log_stream_name in self.streams
+        if log_stream_name not in self.streams:
+            raise ResourceNotFoundException()
         stream = self.streams[log_stream_name]
         return stream.get_log_events(log_group_name, log_stream_name, start_time, end_time, limit, next_token, start_from_head)

     def filter_log_events(self, log_group_name, log_stream_names, start_time, end_time, limit, next_token, filter_pattern, interleaved):
-        assert not filter_pattern  # TODO: impl
         streams = [stream for name, stream in self.streams.items() if not log_stream_names or name in log_stream_names]

         events = []
@@ -170,7 +184,7 @@ class LogGroup:
             events += stream.filter_log_events(log_group_name, log_stream_names, start_time, end_time, limit, next_token, filter_pattern, interleaved)

         if interleaved:
-            events = sorted(events, key=lambda event: event.timestamp)
+            events = sorted(events, key=lambda event: event['timestamp'])

         if next_token is None:
             next_token = 0
@@ -195,7 +209,8 @@ class LogsBackend(BaseBackend):
         self.__init__(region_name)

     def create_log_group(self, log_group_name, tags):
-        assert log_group_name not in self.groups
+        if log_group_name in self.groups:
+            raise ResourceAlreadyExistsException()
         self.groups[log_group_name] = LogGroup(self.region_name, log_group_name, tags)

     def ensure_log_group(self, log_group_name, tags):
@@ -204,37 +219,44 @@ class LogsBackend(BaseBackend):
         self.groups[log_group_name] = LogGroup(self.region_name, log_group_name, tags)

     def delete_log_group(self, log_group_name):
-        assert log_group_name in self.groups
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
         del self.groups[log_group_name]

     def create_log_stream(self, log_group_name, log_stream_name):
-        assert log_group_name in self.groups
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
         log_group = self.groups[log_group_name]
         return log_group.create_log_stream(log_stream_name)

     def delete_log_stream(self, log_group_name, log_stream_name):
-        assert log_group_name in self.groups
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
         log_group = self.groups[log_group_name]
         return log_group.delete_log_stream(log_stream_name)

     def describe_log_streams(self, descending, limit, log_group_name, log_stream_name_prefix, next_token, order_by):
-        assert log_group_name in self.groups
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
         log_group = self.groups[log_group_name]
         return log_group.describe_log_streams(descending, limit, log_group_name, log_stream_name_prefix, next_token, order_by)

     def put_log_events(self, log_group_name, log_stream_name, log_events, sequence_token):
         # TODO: add support for sequence_tokens
-        assert log_group_name in self.groups
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
         log_group = self.groups[log_group_name]
         return log_group.put_log_events(log_group_name, log_stream_name, log_events, sequence_token)

     def get_log_events(self, log_group_name, log_stream_name, start_time, end_time, limit, next_token, start_from_head):
-        assert log_group_name in self.groups
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
         log_group = self.groups[log_group_name]
         return log_group.get_log_events(log_group_name, log_stream_name, start_time, end_time, limit, next_token, start_from_head)

     def filter_log_events(self, log_group_name, log_stream_names, start_time, end_time, limit, next_token, filter_pattern, interleaved):
-        assert log_group_name in self.groups
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
         log_group = self.groups[log_group_name]
         return log_group.filter_log_events(log_group_name, log_stream_names, start_time, end_time, limit, next_token, filter_pattern, interleaved)
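A hedged sketch (not part of this diff) of the behavioral change above: a duplicate CreateLogGroup now comes back as a ClientError the caller can catch, instead of an AssertionError inside the mock:

```python
import boto3
from botocore.exceptions import ClientError
from moto import mock_logs


@mock_logs
def test_duplicate_log_group():
    client = boto3.client('logs', region_name='us-east-1')
    client.create_log_group(logGroupName='my-group')
    try:
        client.create_log_group(logGroupName='my-group')
    except ClientError as err:
        assert err.response['Error']['Code'] == 'ResourceAlreadyExistsException'
    else:
        raise AssertionError('expected ResourceAlreadyExistsException')
```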

View File

@@ -87,9 +87,8 @@ class LogsResponse(BaseResponse):
         events, next_backward_token, next_foward_token = \
             self.logs_backend.get_log_events(log_group_name, log_stream_name, start_time, end_time, limit, next_token, start_from_head)
         return json.dumps({
-            "events": [ob.__dict__ for ob in events],
+            "events": events,
             "nextBackwardToken": next_backward_token,
             "nextForwardToken": next_foward_token
         })

View File

@@ -398,11 +398,82 @@ class Stack(BaseModel):
         return response


+class App(BaseModel):
+
+    def __init__(self, stack_id, name, type,
+                 shortname=None,
+                 description=None,
+                 datasources=None,
+                 app_source=None,
+                 domains=None,
+                 enable_ssl=False,
+                 ssl_configuration=None,
+                 attributes=None,
+                 environment=None):
+        self.stack_id = stack_id
+        self.name = name
+        self.type = type
+        self.shortname = shortname
+        self.description = description
+
+        self.datasources = datasources
+        if datasources is None:
+            self.datasources = []
+
+        self.app_source = app_source
+        if app_source is None:
+            self.app_source = {}
+
+        self.domains = domains
+        if domains is None:
+            self.domains = []
+
+        self.enable_ssl = enable_ssl
+
+        self.ssl_configuration = ssl_configuration
+        if ssl_configuration is None:
+            self.ssl_configuration = {}
+
+        self.attributes = attributes
+        if attributes is None:
+            self.attributes = {}
+
+        self.environment = environment
+        if environment is None:
+            self.environment = {}
+
+        self.id = "{0}".format(uuid.uuid4())
+        self.created_at = datetime.datetime.utcnow()
+
+    def __eq__(self, other):
+        return self.id == other.id
+
+    def to_dict(self):
+        d = {
+            "AppId": self.id,
+            "AppSource": self.app_source,
+            "Attributes": self.attributes,
+            "CreatedAt": self.created_at.isoformat(),
+            "Datasources": self.datasources,
+            "Description": self.description,
+            "Domains": self.domains,
+            "EnableSsl": self.enable_ssl,
+            "Environment": self.environment,
+            "Name": self.name,
+            "Shortname": self.shortname,
+            "SslConfiguration": self.ssl_configuration,
+            "StackId": self.stack_id,
+            "Type": self.type
+        }
+        return d
+
+
 class OpsWorksBackend(BaseBackend):

     def __init__(self, ec2_backend):
         self.stacks = {}
         self.layers = {}
+        self.apps = {}
         self.instances = {}
         self.ec2_backend = ec2_backend
@@ -435,6 +506,20 @@ class OpsWorksBackend(BaseBackend):
         self.stacks[stackid].layers.append(layer)
         return layer

+    def create_app(self, **kwargs):
+        name = kwargs['name']
+        stackid = kwargs['stack_id']
+        if stackid not in self.stacks:
+            raise ResourceNotFoundException(stackid)
+        if name in [a.name for a in self.stacks[stackid].apps]:
+            raise ValidationException(
+                'There is already an app named "{0}" '
+                'for this stack'.format(name))
+        app = App(**kwargs)
+        self.apps[app.id] = app
+        self.stacks[stackid].apps.append(app)
+        return app
+
     def create_instance(self, **kwargs):
         stack_id = kwargs['stack_id']
         layer_ids = kwargs['layer_ids']
@@ -502,6 +587,22 @@ class OpsWorksBackend(BaseBackend):
             raise ResourceNotFoundException(", ".join(unknown_layers))
         return [self.layers[id].to_dict() for id in layer_ids]

+    def describe_apps(self, stack_id, app_ids):
+        if stack_id is not None and app_ids is not None:
+            raise ValidationException(
+                "Please provide one or more app IDs or a stack ID"
+            )
+        if stack_id is not None:
+            if stack_id not in self.stacks:
+                raise ResourceNotFoundException(
+                    "Unable to find stack with ID {0}".format(stack_id))
+            return [app.to_dict() for app in self.stacks[stack_id].apps]
+
+        unknown_apps = set(app_ids) - set(self.apps.keys())
+        if unknown_apps:
+            raise ResourceNotFoundException(", ".join(unknown_apps))
+        return [self.apps[id].to_dict() for id in app_ids]
+
     def describe_instances(self, instance_ids, layer_id, stack_id):
         if len(list(filter(None, (instance_ids, layer_id, stack_id)))) != 1:
             raise ValidationException("Please provide either one or more "

View File

@@ -75,6 +75,24 @@ class OpsWorksResponse(BaseResponse):
         layer = self.opsworks_backend.create_layer(**kwargs)
         return json.dumps({"LayerId": layer.id}, indent=1)

+    def create_app(self):
+        kwargs = dict(
+            stack_id=self.parameters.get('StackId'),
+            name=self.parameters.get('Name'),
+            type=self.parameters.get('Type'),
+            shortname=self.parameters.get('Shortname'),
+            description=self.parameters.get('Description'),
+            datasources=self.parameters.get('DataSources'),
+            app_source=self.parameters.get('AppSource'),
+            domains=self.parameters.get('Domains'),
+            enable_ssl=self.parameters.get('EnableSsl'),
+            ssl_configuration=self.parameters.get('SslConfiguration'),
+            attributes=self.parameters.get('Attributes'),
+            environment=self.parameters.get('Environment')
+        )
+        app = self.opsworks_backend.create_app(**kwargs)
+        return json.dumps({"AppId": app.id}, indent=1)
+
     def create_instance(self):
         kwargs = dict(
             stack_id=self.parameters.get("StackId"),
@@ -110,6 +128,12 @@ class OpsWorksResponse(BaseResponse):
         layers = self.opsworks_backend.describe_layers(stack_id, layer_ids)
         return json.dumps({"Layers": layers}, indent=1)

+    def describe_apps(self):
+        stack_id = self.parameters.get("StackId")
+        app_ids = self.parameters.get("AppIds")
+        apps = self.opsworks_backend.describe_apps(stack_id, app_ids)
+        return json.dumps({"Apps": apps}, indent=1)
+
     def describe_instances(self):
         instance_ids = self.parameters.get("InstanceIds")
         layer_id = self.parameters.get("LayerId")
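A hedged sketch (not part of this diff) of the new OpsWorks app endpoints; the create_stack parameters below are illustrative minimums:

```python
import boto3
from moto import mock_opsworks


@mock_opsworks
def test_create_and_describe_apps():
    client = boto3.client('opsworks', region_name='us-east-1')
    stack_id = client.create_stack(
        Name='my-stack',
        Region='us-east-1',
        ServiceRoleArn='service_arn',
        DefaultInstanceProfileArn='profile_arn',
    )['StackId']
    app_id = client.create_app(StackId=stack_id, Name='my-app', Type='php')['AppId']
    apps = client.describe_apps(StackId=stack_id)['Apps']
    assert apps[0]['AppId'] == app_id
    assert apps[0]['Name'] == 'my-app'
```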

View File

@@ -1,12 +0,0 @@
.arcconfig
.coverage
.DS_Store
.idea
*.db
*.egg-info
*.pyc
/htmlcov
/dist
/build
/.cache
/.tox

View File

@@ -1,27 +0,0 @@
language: python
sudo: false
python:
  - "2.6"
  - "2.7"
  - "3.3"
  - "3.4"
  - "3.5"
cache:
  directories:
    - .pip_download_cache
env:
  matrix:
    - REQUESTS=requests==2.0
    - REQUESTS=-U requests
    - REQUESTS="-e git+git://github.com/kennethreitz/requests.git#egg=requests"
  global:
    - PIP_DOWNLOAD_CACHE=".pip_download_cache"
matrix:
  allow_failures:
    - env: 'REQUESTS="-e git+git://github.com/kennethreitz/requests.git#egg=requests"'
install:
  - "pip install ${REQUESTS}"
  - make develop
script:
  - if [[ $TRAVIS_PYTHON_VERSION != 2.6 ]]; then make lint; fi
  - py.test . --cov responses --cov-report term-missing

View File

@@ -1,32 +0,0 @@
Unreleased
----------
- Allow empty list/dict as json object (GH-100)
0.5.1
-----
- Add LICENSE, README and CHANGES to the PyPI distribution (GH-97).
0.5.0
-----
- Allow passing a JSON body to `response.add` (GH-82)
- Improve ConnectionError emulation (GH-73)
- Correct assertion in assert_all_requests_are_fired (GH-71)
0.4.0
-----
- Requests 2.0+ is required
- Mocking now happens on the adapter instead of the session
0.3.0
-----
- Add the ability to mock errors (GH-22)
- Add responses.mock context manager (GH-36)
- Support custom adapters (GH-33)
- Add support for regexp error matching (GH-25)
- Add support for dynamic bodies via `responses.add_callback` (GH-24)
- Preserve argspec when using `responses.activate` decorator (GH-18)

View File

@@ -1,201 +0,0 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright 2015 David Cramer
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
@ -1,2 +0,0 @@
include README.rst CHANGES LICENSE
global-exclude *~
@ -1,16 +0,0 @@
develop:
pip install -e .
make install-test-requirements
install-test-requirements:
pip install "file://`pwd`#egg=responses[tests]"
test: develop lint
@echo "Running Python tests"
py.test .
@echo ""
lint:
@echo "Linting Python files"
PYFLAKES_NODOCTEST=1 flake8 .
@echo ""
@ -1,190 +0,0 @@
Responses
=========
.. image:: https://travis-ci.org/getsentry/responses.svg?branch=master
:target: https://travis-ci.org/getsentry/responses
A utility library for mocking out the `requests` Python library.
.. note:: Responses requires Requests >= 2.0
Response body as string
-----------------------
.. code-block:: python
import responses
import requests
@responses.activate
def test_my_api():
responses.add(responses.GET, 'http://twitter.com/api/1/foobar',
body='{"error": "not found"}', status=404,
content_type='application/json')
resp = requests.get('http://twitter.com/api/1/foobar')
assert resp.json() == {"error": "not found"}
assert len(responses.calls) == 1
assert responses.calls[0].request.url == 'http://twitter.com/api/1/foobar'
assert responses.calls[0].response.text == '{"error": "not found"}'
You can also specify a JSON object instead of a body string.
.. code-block:: python
import responses
import requests
@responses.activate
def test_my_api():
responses.add(responses.GET, 'http://twitter.com/api/1/foobar',
json={"error": "not found"}, status=404)
resp = requests.get('http://twitter.com/api/1/foobar')
assert resp.json() == {"error": "not found"}
assert len(responses.calls) == 1
assert responses.calls[0].request.url == 'http://twitter.com/api/1/foobar'
assert responses.calls[0].response.text == '{"error": "not found"}'
Request callback
----------------
.. code-block:: python
import json
import responses
import requests
@responses.activate
def test_calc_api():
def request_callback(request):
payload = json.loads(request.body)
resp_body = {'value': sum(payload['numbers'])}
headers = {'request-id': '728d329e-0e86-11e4-a748-0c84dc037c13'}
return (200, headers, json.dumps(resp_body))
responses.add_callback(
responses.POST, 'http://calc.com/sum',
callback=request_callback,
content_type='application/json',
)
resp = requests.post(
'http://calc.com/sum',
json.dumps({'numbers': [1, 2, 3]}),
headers={'content-type': 'application/json'},
)
assert resp.json() == {'value': 6}
assert len(responses.calls) == 1
assert responses.calls[0].request.url == 'http://calc.com/sum'
assert responses.calls[0].response.text == '{"value": 6}'
assert (
responses.calls[0].response.headers['request-id'] ==
'728d329e-0e86-11e4-a748-0c84dc037c13'
)
Instead of passing a string URL into `responses.add` or `responses.add_callback`
you can also supply a compiled regular expression.
.. code-block:: python
import re
import responses
import requests
# Instead of
responses.add(responses.GET, 'http://twitter.com/api/1/foobar',
body='{"error": "not found"}', status=404,
content_type='application/json')
# You can do the following
url_re = re.compile(r'https?://twitter\.com/api/\d+/foobar')
responses.add(responses.GET, url_re,
body='{"error": "not found"}', status=404,
content_type='application/json')
A response can also throw an exception as follows.
.. code-block:: python
import responses
import requests
from requests.exceptions import HTTPError
exception = HTTPError('Something went wrong')
responses.add(responses.GET, 'http://twitter.com/api/1/foobar',
body=exception)
# All calls to 'http://twitter.com/api/1/foobar' will throw exception.
Responses as a context manager
------------------------------
.. code-block:: python
import responses
import requests
def test_my_api():
with responses.RequestsMock() as rsps:
rsps.add(responses.GET, 'http://twitter.com/api/1/foobar',
body='{}', status=200,
content_type='application/json')
resp = requests.get('http://twitter.com/api/1/foobar')
assert resp.status_code == 200
# outside the context manager requests will hit the remote server
resp = requests.get('http://twitter.com/api/1/foobar')
resp.status_code == 404
Assertions on declared responses
--------------------------------
When used as a context manager, Responses will, by default, raise an assertion
error if a url was registered but not accessed. This can be disabled by passing
the ``assert_all_requests_are_fired`` value:
.. code-block:: python
import responses
import requests
def test_my_api():
with responses.RequestsMock(assert_all_requests_are_fired=False) as rsps:
rsps.add(responses.GET, 'http://twitter.com/api/1/foobar',
body='{}', status=200,
content_type='application/json')
Multiple Responses
------------------
You can also add multiple responses for the same url; with ``assert_all_requests_are_fired=True`` every registered response must then be requested:
.. code-block:: python
import responses
import requests
def test_my_api():
with responses.RequestsMock(assert_all_requests_are_fired=True) as rsps:
rsps.add(responses.GET, 'http://twitter.com/api/1/foobar', status=500)
rsps.add(responses.GET, 'http://twitter.com/api/1/foobar',
body='{}', status=200,
content_type='application/json')
resp = requests.get('http://twitter.com/api/1/foobar')
assert resp.status_code == 500
resp = requests.get('http://twitter.com/api/1/foobar')
assert resp.status_code == 200
@ -1,330 +0,0 @@
from __future__ import (
absolute_import, print_function, division, unicode_literals
)
import inspect
import json as json_module
import re
import six
from collections import namedtuple, Sequence, Sized
from functools import update_wrapper
from cookies import Cookies
from requests.adapters import HTTPAdapter
from requests.utils import cookiejar_from_dict
from requests.exceptions import ConnectionError
from requests.sessions import REDIRECT_STATI
try:
from requests.packages.urllib3.response import HTTPResponse
except ImportError:
from urllib3.response import HTTPResponse
if six.PY2:
from urlparse import urlparse, parse_qsl
else:
from urllib.parse import urlparse, parse_qsl
if six.PY2:
try:
from six import cStringIO as BufferIO
except ImportError:
from six import StringIO as BufferIO
else:
from io import BytesIO as BufferIO
Call = namedtuple('Call', ['request', 'response'])
_wrapper_template = """\
def wrapper%(signature)s:
with responses:
return func%(funcargs)s
"""
def _is_string(s):
return isinstance(s, (six.string_types, six.text_type))
def _is_redirect(response):
try:
# 2.0.0 <= requests <= 2.2
return response.is_redirect
except AttributeError:
# requests > 2.2
return (
# use request.sessions conditional
response.status_code in REDIRECT_STATI and
'location' in response.headers
)
def get_wrapped(func, wrapper_template, evaldict):
# Preserve the argspec for the wrapped function so that testing
# tools such as pytest can continue to use their fixture injection.
args, a, kw, defaults = inspect.getargspec(func)
signature = inspect.formatargspec(args, a, kw, defaults)
is_bound_method = hasattr(func, '__self__')
if is_bound_method:
args = args[1:] # Omit 'self'
callargs = inspect.formatargspec(args, a, kw, None)
ctx = {'signature': signature, 'funcargs': callargs}
six.exec_(wrapper_template % ctx, evaldict)
wrapper = evaldict['wrapper']
update_wrapper(wrapper, func)
if is_bound_method:
wrapper = wrapper.__get__(func.__self__, type(func.__self__))
return wrapper
class CallList(Sequence, Sized):
def __init__(self):
self._calls = []
def __iter__(self):
return iter(self._calls)
def __len__(self):
return len(self._calls)
def __getitem__(self, idx):
return self._calls[idx]
def add(self, request, response):
self._calls.append(Call(request, response))
def reset(self):
self._calls = []
def _ensure_url_default_path(url, match_querystring):
if _is_string(url) and url.count('/') == 2:
if match_querystring:
return url.replace('?', '/?', 1)
else:
return url + '/'
return url
class RequestsMock(object):
DELETE = 'DELETE'
GET = 'GET'
HEAD = 'HEAD'
OPTIONS = 'OPTIONS'
PATCH = 'PATCH'
POST = 'POST'
PUT = 'PUT'
def __init__(self, assert_all_requests_are_fired=True, pass_through=True):
self._calls = CallList()
self.reset()
self.assert_all_requests_are_fired = assert_all_requests_are_fired
self.pass_through = pass_through
self.original_send = HTTPAdapter.send
def reset(self):
self._urls = []
self._calls.reset()
def add(self, method, url, body='', match_querystring=False,
status=200, adding_headers=None, stream=False,
content_type='text/plain', json=None):
# if we were passed a `json` argument,
# override the body and content_type
if json is not None:
body = json_module.dumps(json)
content_type = 'application/json'
# ensure the url has a default path set if the url is a string
url = _ensure_url_default_path(url, match_querystring)
# body must be bytes
if isinstance(body, six.text_type):
body = body.encode('utf-8')
self._urls.append({
'url': url,
'method': method,
'body': body,
'content_type': content_type,
'match_querystring': match_querystring,
'status': status,
'adding_headers': adding_headers,
'stream': stream,
})
def add_callback(self, method, url, callback, match_querystring=False,
content_type='text/plain'):
# ensure the url has a default path set if the url is a string
# url = _ensure_url_default_path(url, match_querystring)
self._urls.append({
'url': url,
'method': method,
'callback': callback,
'content_type': content_type,
'match_querystring': match_querystring,
})
@property
def calls(self):
return self._calls
def __enter__(self):
self.start()
return self
def __exit__(self, type, value, traceback):
success = type is None
self.stop(allow_assert=success)
self.reset()
return success
def activate(self, func):
evaldict = {'responses': self, 'func': func}
return get_wrapped(func, _wrapper_template, evaldict)
def _find_match(self, request):
for match in self._urls:
if request.method != match['method']:
continue
if not self._has_url_match(match, request.url):
continue
break
else:
return None
if self.assert_all_requests_are_fired:
# for each found match remove the url from the stack
self._urls.remove(match)
return match
def _has_url_match(self, match, request_url):
url = match['url']
if not match['match_querystring']:
request_url = request_url.split('?', 1)[0]
if _is_string(url):
if match['match_querystring']:
return self._has_strict_url_match(url, request_url)
else:
return url == request_url
elif isinstance(url, re._pattern_type) and url.match(request_url):
return True
else:
return False
def _has_strict_url_match(self, url, other):
url_parsed = urlparse(url)
other_parsed = urlparse(other)
if url_parsed[:3] != other_parsed[:3]:
return False
url_qsl = sorted(parse_qsl(url_parsed.query))
other_qsl = sorted(parse_qsl(other_parsed.query))
return url_qsl == other_qsl
def _on_request(self, adapter, request, **kwargs):
match = self._find_match(request)
# TODO(dcramer): find the correct class for this
if match is None:
if self.pass_through:
return self.original_send(adapter, request, **kwargs)
error_msg = 'Connection refused: {0} {1}'.format(request.method,
request.url)
response = ConnectionError(error_msg)
response.request = request
self._calls.add(request, response)
raise response
if 'body' in match and isinstance(match['body'], Exception):
self._calls.add(request, match['body'])
raise match['body']
headers = {}
if match['content_type'] is not None:
headers['Content-Type'] = match['content_type']
if 'callback' in match: # use callback
status, r_headers, body = match['callback'](request)
if isinstance(body, six.text_type):
body = body.encode('utf-8')
body = BufferIO(body)
headers.update(r_headers)
elif 'body' in match:
if match['adding_headers']:
headers.update(match['adding_headers'])
status = match['status']
body = BufferIO(match['body'])
response = HTTPResponse(
status=status,
reason=six.moves.http_client.responses[status],
body=body,
headers=headers,
preload_content=False,
# Need to not decode_content to mimic requests
decode_content=False,
)
response = adapter.build_response(request, response)
if not match.get('stream'):
response.content # NOQA
try:
resp_cookies = Cookies.from_request(response.headers['set-cookie'])
response.cookies = cookiejar_from_dict(dict(
(v.name, v.value)
for _, v
in resp_cookies.items()
))
except (KeyError, TypeError):
pass
self._calls.add(request, response)
return response
def start(self):
try:
from unittest import mock
except ImportError:
import mock
def unbound_on_send(adapter, request, *a, **kwargs):
return self._on_request(adapter, request, *a, **kwargs)
self._patcher1 = mock.patch('botocore.vendored.requests.adapters.HTTPAdapter.send',
unbound_on_send)
self._patcher1.start()
self._patcher2 = mock.patch('requests.adapters.HTTPAdapter.send',
unbound_on_send)
self._patcher2.start()
def stop(self, allow_assert=True):
self._patcher1.stop()
self._patcher2.stop()
if allow_assert and self.assert_all_requests_are_fired and self._urls:
raise AssertionError(
'Not all requests have been executed {0!r}'.format(
[(url['method'], url['url']) for url in self._urls]))
# expose default mock namespace
mock = _default_mock = RequestsMock(assert_all_requests_are_fired=False, pass_through=False)
__all__ = []
for __attr in (a for a in dir(_default_mock) if not a.startswith('_')):
__all__.append(__attr)
globals()[__attr] = getattr(_default_mock, __attr)
@ -1,5 +0,0 @@
[pytest]
addopts=--tb=short
[bdist_wheel]
universal=1
@ -1,99 +0,0 @@
#!/usr/bin/env python
"""
responses
=========
A utility library for mocking out the `requests` Python library.
:copyright: (c) 2015 David Cramer
:license: Apache 2.0
"""
import sys
import logging
from setuptools import setup
from setuptools.command.test import test as TestCommand
import pkg_resources
setup_requires = []
if 'test' in sys.argv:
setup_requires.append('pytest')
install_requires = [
'requests>=2.0',
'cookies',
'six',
]
tests_require = [
'pytest',
'coverage >= 3.7.1, < 5.0.0',
'pytest-cov',
'flake8',
]
extras_require = {
':python_version in "2.6, 2.7, 3.2"': ['mock'],
'tests': tests_require,
}
try:
if 'bdist_wheel' not in sys.argv:
for key, value in extras_require.items():
if key.startswith(':') and pkg_resources.evaluate_marker(key[1:]):
install_requires.extend(value)
except Exception:
logging.getLogger(__name__).exception(
'Something went wrong calculating platform specific dependencies, so '
"you're getting them all!"
)
for key, value in extras_require.items():
if key.startswith(':'):
install_requires.extend(value)
class PyTest(TestCommand):
def finalize_options(self):
TestCommand.finalize_options(self)
self.test_args = ['test_responses.py']
self.test_suite = True
def run_tests(self):
# import here, cause outside the eggs aren't loaded
import pytest
errno = pytest.main(self.test_args)
sys.exit(errno)
setup(
name='responses',
version='0.6.0',
author='David Cramer',
description=(
'A utility library for mocking out the `requests` Python library.'
),
url='https://github.com/getsentry/responses',
license='Apache 2.0',
long_description=open('README.rst').read(),
py_modules=['responses', 'test_responses'],
zip_safe=False,
install_requires=install_requires,
extras_require=extras_require,
tests_require=tests_require,
setup_requires=setup_requires,
cmdclass={'test': PyTest},
include_package_data=True,
classifiers=[
'Intended Audience :: Developers',
'Intended Audience :: System Administrators',
'Operating System :: OS Independent',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
'Topic :: Software Development'
],
)
@ -1,444 +0,0 @@
from __future__ import (
absolute_import, print_function, division, unicode_literals
)
import re
import requests
import responses
import pytest
from inspect import getargspec
from requests.exceptions import ConnectionError, HTTPError
def assert_reset():
assert len(responses._default_mock._urls) == 0
assert len(responses.calls) == 0
def assert_response(resp, body=None, content_type='text/plain'):
assert resp.status_code == 200
assert resp.reason == 'OK'
if content_type is not None:
assert resp.headers['Content-Type'] == content_type
else:
assert 'Content-Type' not in resp.headers
assert resp.text == body
def test_response():
@responses.activate
def run():
responses.add(responses.GET, 'http://example.com', body=b'test')
resp = requests.get('http://example.com')
assert_response(resp, 'test')
assert len(responses.calls) == 1
assert responses.calls[0].request.url == 'http://example.com/'
assert responses.calls[0].response.content == b'test'
resp = requests.get('http://example.com?foo=bar')
assert_response(resp, 'test')
assert len(responses.calls) == 2
assert responses.calls[1].request.url == 'http://example.com/?foo=bar'
assert responses.calls[1].response.content == b'test'
run()
assert_reset()
def test_connection_error():
@responses.activate
def run():
responses.add(responses.GET, 'http://example.com')
with pytest.raises(ConnectionError):
requests.get('http://example.com/foo')
assert len(responses.calls) == 1
assert responses.calls[0].request.url == 'http://example.com/foo'
assert type(responses.calls[0].response) is ConnectionError
assert responses.calls[0].response.request
run()
assert_reset()
def test_match_querystring():
@responses.activate
def run():
url = 'http://example.com?test=1&foo=bar'
responses.add(
responses.GET, url,
match_querystring=True, body=b'test')
resp = requests.get('http://example.com?test=1&foo=bar')
assert_response(resp, 'test')
resp = requests.get('http://example.com?foo=bar&test=1')
assert_response(resp, 'test')
run()
assert_reset()
def test_match_querystring_error():
@responses.activate
def run():
responses.add(
responses.GET, 'http://example.com/?test=1',
match_querystring=True)
with pytest.raises(ConnectionError):
requests.get('http://example.com/foo/?test=2')
run()
assert_reset()
def test_match_querystring_regex():
@responses.activate
def run():
"""Note that `match_querystring` value shouldn't matter when passing a
regular expression"""
responses.add(
responses.GET, re.compile(r'http://example\.com/foo/\?test=1'),
body='test1', match_querystring=True)
resp = requests.get('http://example.com/foo/?test=1')
assert_response(resp, 'test1')
responses.add(
responses.GET, re.compile(r'http://example\.com/foo/\?test=2'),
body='test2', match_querystring=False)
resp = requests.get('http://example.com/foo/?test=2')
assert_response(resp, 'test2')
run()
assert_reset()
def test_match_querystring_error_regex():
@responses.activate
def run():
"""Note that `match_querystring` value shouldn't matter when passing a
regular expression"""
responses.add(
responses.GET, re.compile(r'http://example\.com/foo/\?test=1'),
match_querystring=True)
with pytest.raises(ConnectionError):
requests.get('http://example.com/foo/?test=3')
responses.add(
responses.GET, re.compile(r'http://example\.com/foo/\?test=2'),
match_querystring=False)
with pytest.raises(ConnectionError):
requests.get('http://example.com/foo/?test=4')
run()
assert_reset()
def test_accept_string_body():
@responses.activate
def run():
url = 'http://example.com/'
responses.add(
responses.GET, url, body='test')
resp = requests.get(url)
assert_response(resp, 'test')
run()
assert_reset()
def test_accept_json_body():
@responses.activate
def run():
content_type = 'application/json'
url = 'http://example.com/'
responses.add(
responses.GET, url, json={"message": "success"})
resp = requests.get(url)
assert_response(resp, '{"message": "success"}', content_type)
url = 'http://example.com/1/'
responses.add(responses.GET, url, json=[])
resp = requests.get(url)
assert_response(resp, '[]', content_type)
run()
assert_reset()
def test_no_content_type():
@responses.activate
def run():
url = 'http://example.com/'
responses.add(
responses.GET, url, body='test', content_type=None)
resp = requests.get(url)
assert_response(resp, 'test', content_type=None)
run()
assert_reset()
def test_throw_connection_error_explicit():
@responses.activate
def run():
url = 'http://example.com'
exception = HTTPError('HTTP Error')
responses.add(
responses.GET, url, exception)
with pytest.raises(HTTPError) as HE:
requests.get(url)
assert str(HE.value) == 'HTTP Error'
run()
assert_reset()
def test_callback():
body = b'test callback'
status = 400
reason = 'Bad Request'
headers = {'foo': 'bar'}
url = 'http://example.com/'
def request_callback(request):
return (status, headers, body)
@responses.activate
def run():
responses.add_callback(responses.GET, url, request_callback)
resp = requests.get(url)
assert resp.text == "test callback"
assert resp.status_code == status
assert resp.reason == reason
assert 'foo' in resp.headers
assert resp.headers['foo'] == 'bar'
run()
assert_reset()
def test_callback_no_content_type():
body = b'test callback'
status = 400
reason = 'Bad Request'
headers = {'foo': 'bar'}
url = 'http://example.com/'
def request_callback(request):
return (status, headers, body)
@responses.activate
def run():
responses.add_callback(
responses.GET, url, request_callback, content_type=None)
resp = requests.get(url)
assert resp.text == "test callback"
assert resp.status_code == status
assert resp.reason == reason
assert 'foo' in resp.headers
assert 'Content-Type' not in resp.headers
run()
assert_reset()
def test_regular_expression_url():
@responses.activate
def run():
url = re.compile(r'https?://(.*\.)?example.com')
responses.add(responses.GET, url, body=b'test')
resp = requests.get('http://example.com')
assert_response(resp, 'test')
resp = requests.get('https://example.com')
assert_response(resp, 'test')
resp = requests.get('https://uk.example.com')
assert_response(resp, 'test')
with pytest.raises(ConnectionError):
requests.get('https://uk.exaaample.com')
run()
assert_reset()
def test_custom_adapter():
@responses.activate
def run():
url = "http://example.com"
responses.add(responses.GET, url, body=b'test')
calls = [0]
class DummyAdapter(requests.adapters.HTTPAdapter):
def send(self, *a, **k):
calls[0] += 1
return super(DummyAdapter, self).send(*a, **k)
# Test that the adapter is actually used
session = requests.Session()
session.mount("http://", DummyAdapter())
resp = session.get(url, allow_redirects=False)
assert calls[0] == 1
# Test that the response is still correctly emulated
session = requests.Session()
session.mount("http://", DummyAdapter())
resp = session.get(url)
assert_response(resp, 'test')
run()
def test_responses_as_context_manager():
def run():
with responses.mock:
responses.add(responses.GET, 'http://example.com', body=b'test')
resp = requests.get('http://example.com')
assert_response(resp, 'test')
assert len(responses.calls) == 1
assert responses.calls[0].request.url == 'http://example.com/'
assert responses.calls[0].response.content == b'test'
resp = requests.get('http://example.com?foo=bar')
assert_response(resp, 'test')
assert len(responses.calls) == 2
assert (responses.calls[1].request.url ==
'http://example.com/?foo=bar')
assert responses.calls[1].response.content == b'test'
run()
assert_reset()
def test_activate_doesnt_change_signature():
def test_function(a, b=None):
return (a, b)
decorated_test_function = responses.activate(test_function)
assert getargspec(test_function) == getargspec(decorated_test_function)
assert decorated_test_function(1, 2) == test_function(1, 2)
assert decorated_test_function(3) == test_function(3)
def test_activate_doesnt_change_signature_for_method():
class TestCase(object):
def test_function(self, a, b=None):
return (self, a, b)
test_case = TestCase()
argspec = getargspec(test_case.test_function)
decorated_test_function = responses.activate(test_case.test_function)
assert argspec == getargspec(decorated_test_function)
assert decorated_test_function(1, 2) == test_case.test_function(1, 2)
assert decorated_test_function(3) == test_case.test_function(3)
def test_response_cookies():
body = b'test callback'
status = 200
headers = {'set-cookie': 'session_id=12345; a=b; c=d'}
url = 'http://example.com/'
def request_callback(request):
return (status, headers, body)
@responses.activate
def run():
responses.add_callback(responses.GET, url, request_callback)
resp = requests.get(url)
assert resp.text == "test callback"
assert resp.status_code == status
assert 'session_id' in resp.cookies
assert resp.cookies['session_id'] == '12345'
assert resp.cookies['a'] == 'b'
assert resp.cookies['c'] == 'd'
run()
assert_reset()
def test_assert_all_requests_are_fired():
def run():
with pytest.raises(AssertionError) as excinfo:
with responses.RequestsMock(
assert_all_requests_are_fired=True) as m:
m.add(responses.GET, 'http://example.com', body=b'test')
assert 'http://example.com' in str(excinfo.value)
assert responses.GET in str(excinfo)
# check that assert_all_requests_are_fired default to True
with pytest.raises(AssertionError):
with responses.RequestsMock() as m:
m.add(responses.GET, 'http://example.com', body=b'test')
# check that assert_all_requests_are_fired doesn't swallow exceptions
with pytest.raises(ValueError):
with responses.RequestsMock() as m:
m.add(responses.GET, 'http://example.com', body=b'test')
raise ValueError()
run()
assert_reset()
def test_allow_redirects_samehost():
redirecting_url = 'http://example.com'
final_url_path = '/1'
final_url = '{0}{1}'.format(redirecting_url, final_url_path)
url_re = re.compile(r'^http://example.com(/)?(\d+)?$')
def request_callback(request):
# endpoint of chained redirect
if request.url.endswith(final_url_path):
return 200, (), b'test'
# otherwise redirect to an integer path
else:
if request.url.endswith('/0'):
n = 1
else:
n = 0
redirect_headers = {'location': '/{0!s}'.format(n)}
return 301, redirect_headers, None
def run():
# setup redirect
with responses.mock:
responses.add_callback(responses.GET, url_re, request_callback)
resp_no_redirects = requests.get(redirecting_url,
allow_redirects=False)
assert resp_no_redirects.status_code == 301
assert len(responses.calls) == 1 # 1x300
assert responses.calls[0][1].status_code == 301
assert_reset()
with responses.mock:
responses.add_callback(responses.GET, url_re, request_callback)
resp_yes_redirects = requests.get(redirecting_url,
allow_redirects=True)
assert len(responses.calls) == 3 # 2x300 + 1x200
assert len(resp_yes_redirects.history) == 2
assert resp_yes_redirects.status_code == 200
assert final_url == resp_yes_redirects.url
status_codes = [call[1].status_code for call in responses.calls]
assert status_codes == [301, 301, 200]
assert_reset()
run()
assert_reset()
@ -1,11 +0,0 @@
[tox]
envlist = {py26,py27,py32,py33,py34,py35}
[testenv]
deps =
pytest
pytest-cov
pytest-flakes
commands =
py.test . --cov responses --cov-report term-missing --flakes
@ -103,6 +103,8 @@ class Database(BaseModel):
         if not self.option_group_name and self.engine in self.default_option_groups:
             self.option_group_name = self.default_option_groups[self.engine]
         self.character_set_name = kwargs.get('character_set_name', None)
+        self.iam_database_authentication_enabled = False
+        self.dbi_resource_id = "db-M5ENSHXFPU6XHZ4G4ZEI5QIO2U"
         self.tags = kwargs.get('tags', [])

     @property
@ -142,6 +144,7 @@ class Database(BaseModel):
               <MultiAZ>{{ database.multi_az }}</MultiAZ>
               <VpcSecurityGroups/>
               <DBInstanceIdentifier>{{ database.db_instance_identifier }}</DBInstanceIdentifier>
+              <DbiResourceId>{{ database.dbi_resource_id }}</DbiResourceId>
               <PreferredBackupWindow>03:50-04:20</PreferredBackupWindow>
               <PreferredMaintenanceWindow>wed:06:38-wed:07:08</PreferredMaintenanceWindow>
               <ReadReplicaDBInstanceIdentifiers>
@ -163,6 +166,7 @@ class Database(BaseModel):
               <ReadReplicaSourceDBInstanceIdentifier>{{ database.source_db_identifier }}</ReadReplicaSourceDBInstanceIdentifier>
               {% endif %}
               <Engine>{{ database.engine }}</Engine>
+              <IAMDatabaseAuthenticationEnabled>{{database.iam_database_authentication_enabled }}</IAMDatabaseAuthenticationEnabled>
               <LicenseModel>{{ database.license_model }}</LicenseModel>
               <EngineVersion>{{ database.engine_version }}</EngineVersion>
               <OptionGroupMemberships>
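Both new fields surface in ``DescribeDBInstances``. A hedged sketch of how they might be checked (assuming boto3 and moto's ``@mock_rds2`` decorator; the instance parameters are illustrative and not part of this change):

.. code-block:: python

    import boto3
    from moto import mock_rds2

    @mock_rds2
    def test_dbi_resource_id_is_reported():
        client = boto3.client('rds', region_name='us-west-2')
        client.create_db_instance(DBInstanceIdentifier='test-db',
                                  DBInstanceClass='db.m1.small',
                                  Engine='postgres',
                                  AllocatedStorage=10,
                                  MasterUsername='root',
                                  MasterUserPassword='password')
        db = client.describe_db_instances()['DBInstances'][0]
        # The backend hard-codes one resource id for every instance.
        assert db['DbiResourceId'] == 'db-M5ENSHXFPU6XHZ4G4ZEI5QIO2U'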
@ -123,7 +123,7 @@ class RDS2Response(BaseResponse):
             start = all_ids.index(marker) + 1
         else:
             start = 0
-        page_size = self._get_param('MaxRecords', 50)  # the default is 100, but using 50 to make testing easier
+        page_size = self._get_int_param('MaxRecords', 50)  # the default is 100, but using 50 to make testing easier
         instances_resp = all_instances[start:start + page_size]
         next_marker = None
         if len(all_instances) > start + page_size:
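With ``MaxRecords`` parsed as an integer, the slice arithmetic above is sound. A sketch of the paging this enables (assuming ``@mock_rds2``; moto pages at 50 by default, per the comment above):

.. code-block:: python

    import boto3
    from moto import mock_rds2

    @mock_rds2
    def test_describe_db_instances_is_paginated():
        client = boto3.client('rds', region_name='us-west-2')
        for i in range(51):
            client.create_db_instance(DBInstanceIdentifier='rds%d' % i,
                                      DBInstanceClass='db.t1.micro',
                                      Engine='postgres',
                                      AllocatedStorage=5)
        page1 = client.describe_db_instances()
        assert len(page1['DBInstances']) == 50   # moto's default page size
        page2 = client.describe_db_instances(Marker=page1['Marker'])
        assert len(page2['DBInstances']) == 1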
@ -58,6 +58,21 @@ class InvalidSubnetError(RedshiftClientError):
"Subnet {0} not found.".format(subnet_identifier)) "Subnet {0} not found.".format(subnet_identifier))
class SnapshotCopyGrantAlreadyExistsFaultError(RedshiftClientError):
def __init__(self, snapshot_copy_grant_name):
super(SnapshotCopyGrantAlreadyExistsFaultError, self).__init__(
'SnapshotCopyGrantAlreadyExistsFault',
"Cannot create the snapshot copy grant because a grant "
"with the identifier '{0}' already exists".format(snapshot_copy_grant_name))
class SnapshotCopyGrantNotFoundFaultError(RedshiftClientError):
def __init__(self, snapshot_copy_grant_name):
super(SnapshotCopyGrantNotFoundFaultError, self).__init__(
'SnapshotCopyGrantNotFoundFault',
"Snapshot copy grant not found: {0}".format(snapshot_copy_grant_name))
class ClusterSnapshotNotFoundError(RedshiftClientError): class ClusterSnapshotNotFoundError(RedshiftClientError):
def __init__(self, snapshot_identifier): def __init__(self, snapshot_identifier):
super(ClusterSnapshotNotFoundError, self).__init__( super(ClusterSnapshotNotFoundError, self).__init__(
@ -93,3 +108,24 @@ class ResourceNotFoundFaultError(RedshiftClientError):
msg = message msg = message
super(ResourceNotFoundFaultError, self).__init__( super(ResourceNotFoundFaultError, self).__init__(
'ResourceNotFoundFault', msg) 'ResourceNotFoundFault', msg)
class SnapshotCopyDisabledFaultError(RedshiftClientError):
def __init__(self, cluster_identifier):
super(SnapshotCopyDisabledFaultError, self).__init__(
'SnapshotCopyDisabledFault',
"Cannot modify retention period because snapshot copy is disabled on Cluster {0}.".format(cluster_identifier))
class SnapshotCopyAlreadyDisabledFaultError(RedshiftClientError):
def __init__(self, cluster_identifier):
super(SnapshotCopyAlreadyDisabledFaultError, self).__init__(
'SnapshotCopyAlreadyDisabledFault',
"Snapshot Copy is already disabled on Cluster {0}.".format(cluster_identifier))
class SnapshotCopyAlreadyEnabledFaultError(RedshiftClientError):
def __init__(self, cluster_identifier):
super(SnapshotCopyAlreadyEnabledFaultError, self).__init__(
'SnapshotCopyAlreadyEnabledFault',
"Snapshot Copy is already enabled on Cluster {0}.".format(cluster_identifier))
@ -4,6 +4,7 @@ import copy
 import datetime

 import boto.redshift
+from botocore.exceptions import ClientError
 from moto.compat import OrderedDict
 from moto.core import BaseBackend, BaseModel
 from moto.core.utils import iso_8601_datetime_with_milliseconds
@ -17,7 +18,12 @@ from .exceptions import (
     ClusterSubnetGroupNotFoundError,
     InvalidParameterValueError,
     InvalidSubnetError,
-    ResourceNotFoundFaultError
+    ResourceNotFoundFaultError,
+    SnapshotCopyAlreadyDisabledFaultError,
+    SnapshotCopyAlreadyEnabledFaultError,
+    SnapshotCopyDisabledFaultError,
+    SnapshotCopyGrantAlreadyExistsFaultError,
+    SnapshotCopyGrantNotFoundFaultError,
 )
@ -67,7 +73,7 @@ class Cluster(TaggableResourceMixin, BaseModel):
                  preferred_maintenance_window, cluster_parameter_group_name,
                  automated_snapshot_retention_period, port, cluster_version,
                  allow_version_upgrade, number_of_nodes, publicly_accessible,
-                 encrypted, region_name, tags=None):
+                 encrypted, region_name, tags=None, iam_roles_arn=None):
         super(Cluster, self).__init__(region_name, tags)
         self.redshift_backend = redshift_backend
         self.cluster_identifier = cluster_identifier
@ -112,6 +118,8 @@ class Cluster(TaggableResourceMixin, BaseModel):
         else:
             self.number_of_nodes = 1

+        self.iam_roles_arn = iam_roles_arn or []
+
     @classmethod
     def create_from_cloudformation_json(cls, resource_name, cloudformation_json, region_name):
         redshift_backend = redshift_backends[region_name]
@ -194,7 +202,7 @@ class Cluster(TaggableResourceMixin, BaseModel):
         return self.cluster_identifier

     def to_json(self):
-        return {
+        json_response = {
             "MasterUsername": self.master_username,
             "MasterUserPassword": "****",
             "ClusterVersion": self.cluster_version,
@ -228,7 +236,32 @@ class Cluster(TaggableResourceMixin, BaseModel):
                 "Port": self.port
             },
             "PendingModifiedValues": [],
-            "Tags": self.tags
+            "Tags": self.tags,
+            "IamRoles": [{
+                "ApplyStatus": "in-sync",
+                "IamRoleArn": iam_role_arn
+            } for iam_role_arn in self.iam_roles_arn]
+        }
+        try:
+            json_response['ClusterSnapshotCopyStatus'] = self.cluster_snapshot_copy_status
+        except AttributeError:
+            pass
+        return json_response
+
+
+class SnapshotCopyGrant(TaggableResourceMixin, BaseModel):
+
+    resource_type = 'snapshotcopygrant'
+
+    def __init__(self, snapshot_copy_grant_name, kms_key_id):
+        self.snapshot_copy_grant_name = snapshot_copy_grant_name
+        self.kms_key_id = kms_key_id
+
+    def to_json(self):
+        return {
+            "SnapshotCopyGrantName": self.snapshot_copy_grant_name,
+            "KmsKeyId": self.kms_key_id
         }
@ -351,7 +384,7 @@ class Snapshot(TaggableResourceMixin, BaseModel):
     resource_type = 'snapshot'

-    def __init__(self, cluster, snapshot_identifier, region_name, tags=None):
+    def __init__(self, cluster, snapshot_identifier, region_name, tags=None, iam_roles_arn=None):
         super(Snapshot, self).__init__(region_name, tags)
         self.cluster = copy.copy(cluster)
         self.snapshot_identifier = snapshot_identifier
@ -359,6 +392,7 @@ class Snapshot(TaggableResourceMixin, BaseModel):
         self.status = 'available'
         self.create_time = iso_8601_datetime_with_milliseconds(
             datetime.datetime.now())
+        self.iam_roles_arn = iam_roles_arn or []

     @property
     def resource_id(self):
@ -380,7 +414,11 @@ class Snapshot(TaggableResourceMixin, BaseModel):
             'NodeType': self.cluster.node_type,
             'NumberOfNodes': self.cluster.number_of_nodes,
             'DBName': self.cluster.db_name,
-            'Tags': self.tags
+            'Tags': self.tags,
+            "IamRoles": [{
+                "ApplyStatus": "in-sync",
+                "IamRoleArn": iam_role_arn
+            } for iam_role_arn in self.iam_roles_arn]
         }
@ -410,6 +448,7 @@ class RedshiftBackend(BaseBackend):
             'snapshot': self.snapshots,
             'subnetgroup': self.subnet_groups
         }
+        self.snapshot_copy_grants = {}

     def reset(self):
         ec2_backend = self.ec2_backend
@ -417,6 +456,43 @@ class RedshiftBackend(BaseBackend):
         self.__dict__ = {}
         self.__init__(ec2_backend, region_name)

+    def enable_snapshot_copy(self, **kwargs):
+        cluster_identifier = kwargs['cluster_identifier']
+        cluster = self.clusters[cluster_identifier]
+        if not hasattr(cluster, 'cluster_snapshot_copy_status'):
+            if cluster.encrypted == 'true' and kwargs['snapshot_copy_grant_name'] is None:
+                raise ClientError(
+                    'InvalidParameterValue',
+                    'SnapshotCopyGrantName is required for Snapshot Copy '
+                    'on KMS encrypted clusters.'
+                )
+            status = {
+                'DestinationRegion': kwargs['destination_region'],
+                'RetentionPeriod': kwargs['retention_period'],
+                'SnapshotCopyGrantName': kwargs['snapshot_copy_grant_name'],
+            }
+            cluster.cluster_snapshot_copy_status = status
+            return cluster
+        else:
+            raise SnapshotCopyAlreadyEnabledFaultError(cluster_identifier)
+
+    def disable_snapshot_copy(self, **kwargs):
+        cluster_identifier = kwargs['cluster_identifier']
+        cluster = self.clusters[cluster_identifier]
+        if hasattr(cluster, 'cluster_snapshot_copy_status'):
+            del cluster.cluster_snapshot_copy_status
+            return cluster
+        else:
+            raise SnapshotCopyAlreadyDisabledFaultError(cluster_identifier)
+
+    def modify_snapshot_copy_retention_period(self, cluster_identifier, retention_period):
+        cluster = self.clusters[cluster_identifier]
+        if hasattr(cluster, 'cluster_snapshot_copy_status'):
+            cluster.cluster_snapshot_copy_status['RetentionPeriod'] = retention_period
+            return cluster
+        else:
+            raise SnapshotCopyDisabledFaultError(cluster_identifier)
+
     def create_cluster(self, **cluster_kwargs):
         cluster_identifier = cluster_kwargs['cluster_identifier']
         cluster = Cluster(self, **cluster_kwargs)
@ -568,6 +644,31 @@ class RedshiftBackend(BaseBackend):
         create_kwargs.update(kwargs)
         return self.create_cluster(**create_kwargs)

+    def create_snapshot_copy_grant(self, **kwargs):
+        snapshot_copy_grant_name = kwargs['snapshot_copy_grant_name']
+        kms_key_id = kwargs['kms_key_id']
+        if snapshot_copy_grant_name not in self.snapshot_copy_grants:
+            snapshot_copy_grant = SnapshotCopyGrant(snapshot_copy_grant_name, kms_key_id)
+            self.snapshot_copy_grants[snapshot_copy_grant_name] = snapshot_copy_grant
+            return snapshot_copy_grant
+        raise SnapshotCopyGrantAlreadyExistsFaultError(snapshot_copy_grant_name)
+
+    def delete_snapshot_copy_grant(self, **kwargs):
+        snapshot_copy_grant_name = kwargs['snapshot_copy_grant_name']
+        if snapshot_copy_grant_name in self.snapshot_copy_grants:
+            return self.snapshot_copy_grants.pop(snapshot_copy_grant_name)
+        raise SnapshotCopyGrantNotFoundFaultError(snapshot_copy_grant_name)
+
+    def describe_snapshot_copy_grants(self, **kwargs):
+        copy_grants = self.snapshot_copy_grants.values()
+        snapshot_copy_grant_name = kwargs['snapshot_copy_grant_name']
+        if snapshot_copy_grant_name:
+            if snapshot_copy_grant_name in self.snapshot_copy_grants:
+                return [self.snapshot_copy_grants[snapshot_copy_grant_name]]
+            else:
+                raise SnapshotCopyGrantNotFoundFaultError(snapshot_copy_grant_name)
+        return copy_grants
+
     def _get_resource_from_arn(self, arn):
         try:
             arn_breakdown = arn.split(':')
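A rough usage sketch for the new snapshot-copy backend methods (assuming boto3 and ``@mock_redshift``; cluster parameters are illustrative):

.. code-block:: python

    import boto3
    from moto import mock_redshift

    @mock_redshift
    def test_snapshot_copy_lifecycle():
        client = boto3.client('redshift', region_name='us-east-1')
        client.create_cluster(ClusterIdentifier='test',
                              ClusterType='single-node',
                              NodeType='ds2.xlarge',
                              MasterUsername='user',
                              MasterUserPassword='Password1')
        client.enable_snapshot_copy(ClusterIdentifier='test',
                                    DestinationRegion='us-west-2',
                                    RetentionPeriod=3)
        # The status is attached to the cluster and echoed back by DescribeClusters.
        cluster = client.describe_clusters(ClusterIdentifier='test')['Clusters'][0]
        assert cluster['ClusterSnapshotCopyStatus']['DestinationRegion'] == 'us-west-2'
        client.modify_snapshot_copy_retention_period(ClusterIdentifier='test',
                                                     RetentionPeriod=5)
        client.disable_snapshot_copy(ClusterIdentifier='test')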
@ -99,6 +99,12 @@ class RedshiftResponse(BaseResponse):
         vpc_security_group_ids = self._get_multi_param('VpcSecurityGroupIds.VpcSecurityGroupId')
         return vpc_security_group_ids

+    def _get_iam_roles(self):
+        iam_roles = self._get_multi_param('IamRoles.member')
+        if not iam_roles:
+            iam_roles = self._get_multi_param('IamRoles.IamRoleArn')
+        return iam_roles
+
     def _get_subnet_ids(self):
         subnet_ids = self._get_multi_param('SubnetIds.member')
         if not subnet_ids:
@ -127,7 +133,8 @@ class RedshiftResponse(BaseResponse):
             "publicly_accessible": self._get_param("PubliclyAccessible"),
             "encrypted": self._get_param("Encrypted"),
             "region_name": self.region,
-            "tags": self.unpack_complex_list_params('Tags.Tag', ('Key', 'Value'))
+            "tags": self.unpack_complex_list_params('Tags.Tag', ('Key', 'Value')),
+            "iam_roles_arn": self._get_iam_roles(),
         }
         cluster = self.redshift_backend.create_cluster(**cluster_kwargs).to_json()
         cluster['ClusterStatus'] = 'creating'
@ -162,6 +169,7 @@ class RedshiftResponse(BaseResponse):
             "automated_snapshot_retention_period": self._get_int_param(
                 'AutomatedSnapshotRetentionPeriod'),
             "region_name": self.region,
+            "iam_roles_arn": self._get_iam_roles(),
         }
         cluster = self.redshift_backend.restore_from_cluster_snapshot(**restore_kwargs).to_json()
         cluster['ClusterStatus'] = 'creating'
@ -209,6 +217,7 @@ class RedshiftResponse(BaseResponse):
             "number_of_nodes": self._get_int_param('NumberOfNodes'),
             "publicly_accessible": self._get_param("PubliclyAccessible"),
             "encrypted": self._get_param("Encrypted"),
+            "iam_roles_arn": self._get_iam_roles(),
         }
         cluster_kwargs = {}
         # We only want parameters that were actually passed in, otherwise
@ -457,6 +466,55 @@ class RedshiftResponse(BaseResponse):
             }
         })

+    def create_snapshot_copy_grant(self):
+        copy_grant_kwargs = {
+            'snapshot_copy_grant_name': self._get_param('SnapshotCopyGrantName'),
+            'kms_key_id': self._get_param('KmsKeyId'),
+            'region_name': self._get_param('Region'),
+        }
+        copy_grant = self.redshift_backend.create_snapshot_copy_grant(**copy_grant_kwargs)
+        return self.get_response({
+            "CreateSnapshotCopyGrantResponse": {
+                "CreateSnapshotCopyGrantResult": {
+                    "SnapshotCopyGrant": copy_grant.to_json()
+                },
+                "ResponseMetadata": {
+                    "RequestId": "384ac68d-3775-11df-8963-01868b7c937a",
+                }
+            }
+        })
+
+    def delete_snapshot_copy_grant(self):
+        copy_grant_kwargs = {
+            'snapshot_copy_grant_name': self._get_param('SnapshotCopyGrantName'),
+        }
+        self.redshift_backend.delete_snapshot_copy_grant(**copy_grant_kwargs)
+        return self.get_response({
+            "DeleteSnapshotCopyGrantResponse": {
+                "ResponseMetadata": {
+                    "RequestId": "384ac68d-3775-11df-8963-01868b7c937a",
+                }
+            }
+        })
+
+    def describe_snapshot_copy_grants(self):
+        copy_grant_kwargs = {
+            'snapshot_copy_grant_name': self._get_param('SnapshotCopyGrantName'),
+        }
+        copy_grants = self.redshift_backend.describe_snapshot_copy_grants(**copy_grant_kwargs)
+        return self.get_response({
+            "DescribeSnapshotCopyGrantsResponse": {
+                "DescribeSnapshotCopyGrantsResult": {
+                    "SnapshotCopyGrants": [copy_grant.to_json() for copy_grant in copy_grants]
+                },
+                "ResponseMetadata": {
+                    "RequestId": "384ac68d-3775-11df-8963-01868b7c937a",
+                }
+            }
+        })
+
     def create_tags(self):
         resource_name = self._get_param('ResourceName')
         tags = self.unpack_complex_list_params('Tags.Tag', ('Key', 'Value'))
@ -501,3 +559,58 @@ class RedshiftResponse(BaseResponse):
                 }
             }
         })
+
+    def enable_snapshot_copy(self):
+        snapshot_copy_kwargs = {
+            'cluster_identifier': self._get_param('ClusterIdentifier'),
+            'destination_region': self._get_param('DestinationRegion'),
+            'retention_period': self._get_param('RetentionPeriod', 7),
+            'snapshot_copy_grant_name': self._get_param('SnapshotCopyGrantName'),
+        }
+        cluster = self.redshift_backend.enable_snapshot_copy(**snapshot_copy_kwargs)
+        return self.get_response({
+            "EnableSnapshotCopyResponse": {
+                "EnableSnapshotCopyResult": {
+                    "Cluster": cluster.to_json()
+                },
+                "ResponseMetadata": {
+                    "RequestId": "384ac68d-3775-11df-8963-01868b7c937a",
+                }
+            }
+        })
+
+    def disable_snapshot_copy(self):
+        snapshot_copy_kwargs = {
+            'cluster_identifier': self._get_param('ClusterIdentifier'),
+        }
+        cluster = self.redshift_backend.disable_snapshot_copy(**snapshot_copy_kwargs)
+        return self.get_response({
+            "DisableSnapshotCopyResponse": {
+                "DisableSnapshotCopyResult": {
+                    "Cluster": cluster.to_json()
+                },
+                "ResponseMetadata": {
+                    "RequestId": "384ac68d-3775-11df-8963-01868b7c937a",
+                }
+            }
+        })
+
+    def modify_snapshot_copy_retention_period(self):
+        snapshot_copy_kwargs = {
+            'cluster_identifier': self._get_param('ClusterIdentifier'),
+            'retention_period': self._get_param('RetentionPeriod'),
+        }
+        cluster = self.redshift_backend.modify_snapshot_copy_retention_period(**snapshot_copy_kwargs)
+        return self.get_response({
+            "ModifySnapshotCopyRetentionPeriodResponse": {
+                "ModifySnapshotCopyRetentionPeriodResult": {
+                    "Clusters": [cluster.to_json()]
+                },
+                "ResponseMetadata": {
+                    "RequestId": "384ac68d-3775-11df-8963-01868b7c937a",
+                }
+            }
+        })
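A hedged sketch of the grant round trip these handlers enable (assuming boto3 and moto's ``@mock_redshift`` decorator; the grant name and KMS key id are made-up values):

.. code-block:: python

    import boto3
    from moto import mock_redshift

    @mock_redshift
    def test_snapshot_copy_grants():
        client = boto3.client('redshift', region_name='us-east-1')
        client.create_snapshot_copy_grant(SnapshotCopyGrantName='test-grant',
                                          KmsKeyId='fake-kms-key-id')
        grants = client.describe_snapshot_copy_grants(
            SnapshotCopyGrantName='test-grant')['SnapshotCopyGrants']
        assert grants[0]['KmsKeyId'] == 'fake-kms-key-id'
        client.delete_snapshot_copy_grant(SnapshotCopyGrantName='test-grant')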
@ -119,15 +119,17 @@ class ResourceGroupsTaggingAPIBackend(BaseBackend):
         def tag_filter(tag_list):
             result = []
-            for tag in tag_list:
-                temp_result = []
-                for f in filters:
-                    f_result = f(tag['Key'], tag['Value'])
-                    temp_result.append(f_result)
-                result.append(all(temp_result))
-            return any(result)
+            if tag_filters:
+                for tag in tag_list:
+                    temp_result = []
+                    for f in filters:
+                        f_result = f(tag['Key'], tag['Value'])
+                        temp_result.append(f_result)
+                    result.append(all(temp_result))
+                return any(result)
+            else:
+                return True

         # Do S3, resource type s3
         if not resource_type_filters or 's3' in resource_type_filters:
@ -210,6 +212,23 @@ class ResourceGroupsTaggingAPIBackend(BaseBackend):
         # TODO add these to the keys and values functions / combine functions
         # ELB

+        def get_elbv2_tags(arn):
+            result = []
+            for key, value in self.elbv2_backend.load_balancers[elb.arn].tags.items():
+                result.append({'Key': key, 'Value': value})
+            return result
+
+        if not resource_type_filters or 'elasticloadbalancer' in resource_type_filters or 'elasticloadbalancer:loadbalancer' in resource_type_filters:
+            for elb in self.elbv2_backend.load_balancers.values():
+                tags = get_elbv2_tags(elb.arn)
+                # if 'elasticloadbalancer:loadbalancer' in resource_type_filters:
+                #     from IPython import embed
+                #     embed()
+                if not tag_filter(tags):  # Skip if no tags, or invalid filter
+                    continue
+
+                yield {'ResourceARN': '{0}'.format(elb.arn), 'Tags': tags}
+
         # EMR Cluster
         # Glacier Vault
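A hedged sketch of tag filtering over ELBv2 load balancers through the tagging API (assuming the stacked ``@mock_ec2``/``@mock_elbv2``/``@mock_resourcegroupstaggingapi`` decorators share backends, as in moto's own tests; the VPC and subnet setup is illustrative):

.. code-block:: python

    import boto3
    from moto import mock_ec2, mock_elbv2, mock_resourcegroupstaggingapi

    @mock_ec2
    @mock_elbv2
    @mock_resourcegroupstaggingapi
    def test_get_resources_elbv2():
        ec2 = boto3.resource('ec2', region_name='us-east-1')
        vpc = ec2.create_vpc(CidrBlock='172.28.7.0/24')
        subnet1 = ec2.create_subnet(VpcId=vpc.id, CidrBlock='172.28.7.0/26',
                                    AvailabilityZone='us-east-1a')
        subnet2 = ec2.create_subnet(VpcId=vpc.id, CidrBlock='172.28.7.64/26',
                                    AvailabilityZone='us-east-1b')
        elbv2 = boto3.client('elbv2', region_name='us-east-1')
        lb = elbv2.create_load_balancer(
            Name='my-lb',
            Subnets=[subnet1.id, subnet2.id])['LoadBalancers'][0]
        elbv2.add_tags(ResourceArns=[lb['LoadBalancerArn']],
                       Tags=[{'Key': 'team', 'Value': 'infra'}])
        rgta = boto3.client('resourcegroupstaggingapi', region_name='us-east-1')
        resp = rgta.get_resources(
            ResourceTypeFilters=['elasticloadbalancer:loadbalancer'])
        assert resp['ResourceTagMappingList'][0]['Tags'] == [
            {'Key': 'team', 'Value': 'infra'}]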
@ -140,7 +140,9 @@ class RecordSet(BaseModel):
         {% if record_set.region %}
         <Region>{{ record_set.region }}</Region>
         {% endif %}
-        <TTL>{{ record_set.ttl }}</TTL>
+        {% if record_set.ttl %}
+        <TTL>{{ record_set.ttl }}</TTL>
+        {% endif %}
         <ResourceRecords>
         {% for record in record_set.records %}
             <ResourceRecord>
@ -150,7 +150,7 @@ class Route53(BaseResponse):
elif method == "GET": elif method == "GET":
querystring = parse_qs(parsed_url.query) querystring = parse_qs(parsed_url.query)
template = Template(LIST_RRSET_REPONSE) template = Template(LIST_RRSET_RESPONSE)
start_type = querystring.get("type", [None])[0] start_type = querystring.get("type", [None])[0]
start_name = querystring.get("name", [None])[0] start_name = querystring.get("name", [None])[0]
record_sets = the_zone.get_record_sets(start_type, start_name) record_sets = the_zone.get_record_sets(start_type, start_name)
@ -182,9 +182,9 @@ class Route53(BaseResponse):
elif method == "DELETE": elif method == "DELETE":
health_check_id = parsed_url.path.split("/")[-1] health_check_id = parsed_url.path.split("/")[-1]
route53_backend.delete_health_check(health_check_id) route53_backend.delete_health_check(health_check_id)
return 200, headers, DELETE_HEALTH_CHECK_REPONSE return 200, headers, DELETE_HEALTH_CHECK_RESPONSE
elif method == "GET": elif method == "GET":
template = Template(LIST_HEALTH_CHECKS_REPONSE) template = Template(LIST_HEALTH_CHECKS_RESPONSE)
health_checks = route53_backend.get_health_checks() health_checks = route53_backend.get_health_checks()
return 200, headers, template.render(health_checks=health_checks) return 200, headers, template.render(health_checks=health_checks)
@ -248,7 +248,7 @@ CHANGE_TAGS_FOR_RESOURCE_RESPONSE = """<ChangeTagsForResourceResponse xmlns="htt
</ChangeTagsForResourceResponse> </ChangeTagsForResourceResponse>
""" """
LIST_RRSET_REPONSE = """<ListResourceRecordSetsResponse xmlns="https://route53.amazonaws.com/doc/2012-12-12/"> LIST_RRSET_RESPONSE = """<ListResourceRecordSetsResponse xmlns="https://route53.amazonaws.com/doc/2012-12-12/">
<ResourceRecordSets> <ResourceRecordSets>
{% for record_set in record_sets %} {% for record_set in record_sets %}
{{ record_set.to_xml() }} {{ record_set.to_xml() }}
@ -350,7 +350,7 @@ CREATE_HEALTH_CHECK_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
{{ health_check.to_xml() }} {{ health_check.to_xml() }}
</CreateHealthCheckResponse>""" </CreateHealthCheckResponse>"""
LIST_HEALTH_CHECKS_REPONSE = """<?xml version="1.0" encoding="UTF-8"?> LIST_HEALTH_CHECKS_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<ListHealthChecksResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"> <ListHealthChecksResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/">
<HealthChecks> <HealthChecks>
{% for health_check in health_checks %} {% for health_check in health_checks %}
@ -361,6 +361,6 @@ LIST_HEALTH_CHECKS_REPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<MaxItems>{{ health_checks|length }}</MaxItems> <MaxItems>{{ health_checks|length }}</MaxItems>
</ListHealthChecksResponse>""" </ListHealthChecksResponse>"""
DELETE_HEALTH_CHECK_REPONSE = """<?xml version="1.0" encoding="UTF-8"?> DELETE_HEALTH_CHECK_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<DeleteHealthCheckResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/"> <DeleteHealthCheckResponse xmlns="https://route53.amazonaws.com/doc/2013-04-01/">
</DeleteHealthCheckResponse>""" </DeleteHealthCheckResponse>"""
@ -111,3 +111,60 @@ class MalformedXML(S3ClientError):
"MalformedXML", "MalformedXML",
"The XML you provided was not well-formed or did not validate against our published schema", "The XML you provided was not well-formed or did not validate against our published schema",
*args, **kwargs) *args, **kwargs)
class MalformedACLError(S3ClientError):
code = 400
def __init__(self, *args, **kwargs):
super(MalformedACLError, self).__init__(
"MalformedACLError",
"The XML you provided was not well-formed or did not validate against our published schema",
*args, **kwargs)
class InvalidTargetBucketForLogging(S3ClientError):
code = 400
def __init__(self, msg):
super(InvalidTargetBucketForLogging, self).__init__("InvalidTargetBucketForLogging", msg)
class CrossLocationLoggingProhibitted(S3ClientError):
code = 403
def __init__(self):
super(CrossLocationLoggingProhibitted, self).__init__(
"CrossLocationLoggingProhibitted",
"Cross S3 location logging not allowed."
)
class InvalidNotificationARN(S3ClientError):
code = 400
def __init__(self, *args, **kwargs):
super(InvalidNotificationARN, self).__init__(
"InvalidArgument",
"The ARN is not well formed",
*args, **kwargs)
class InvalidNotificationDestination(S3ClientError):
code = 400
def __init__(self, *args, **kwargs):
super(InvalidNotificationDestination, self).__init__(
"InvalidArgument",
"The notification destination service region is not valid for the bucket location constraint",
*args, **kwargs)
class InvalidNotificationEvent(S3ClientError):
code = 400
def __init__(self, *args, **kwargs):
super(InvalidNotificationEvent, self).__init__(
"InvalidArgument",
"The event is not supported for notifications",
*args, **kwargs)
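For orientation: each of these classes pre-bakes the error code and message that moto renders back to the client, so from a test's point of view they surface as `botocore.exceptions.ClientError`. A minimal sketch of asserting on one of them, using the logging validation that appears later in models.py (bucket names are illustrative):

```python
import boto3
from botocore.exceptions import ClientError
from moto import mock_s3


@mock_s3
def test_logging_target_must_exist():
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.create_bucket(Bucket='website')
    try:
        # The target bucket was never created, so the backend should raise
        # InvalidTargetBucketForLogging (HTTP 400).
        s3.put_bucket_logging(
            Bucket='website',
            BucketLoggingStatus={'LoggingEnabled': {
                'TargetBucket': 'no-such-bucket',
                'TargetPrefix': ''}})
        raise AssertionError('expected InvalidTargetBucketForLogging')
    except ClientError as err:
        assert err.response['Error']['Code'] == 'InvalidTargetBucketForLogging'
```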

View File

@ -6,12 +6,16 @@ import hashlib
import copy import copy
import itertools import itertools
import codecs import codecs
import random
import string
import six import six
from bisect import insort from bisect import insort
from moto.core import BaseBackend, BaseModel from moto.core import BaseBackend, BaseModel
from moto.core.utils import iso_8601_datetime_with_milliseconds, rfc_1123_datetime from moto.core.utils import iso_8601_datetime_with_milliseconds, rfc_1123_datetime
from .exceptions import BucketAlreadyExists, MissingBucket, InvalidPart, EntityTooSmall, MissingKey

from .exceptions import BucketAlreadyExists, MissingBucket, InvalidPart, EntityTooSmall, MissingKey, \
    InvalidNotificationDestination, MalformedXML
from .utils import clean_key_name, _VersionedKeyStore from .utils import clean_key_name, _VersionedKeyStore
UPLOAD_ID_BYTES = 43 UPLOAD_ID_BYTES = 43
@ -270,7 +274,7 @@ def get_canned_acl(acl):
grants.append(FakeGrant([ALL_USERS_GRANTEE], [PERMISSION_READ])) grants.append(FakeGrant([ALL_USERS_GRANTEE], [PERMISSION_READ]))
elif acl == 'public-read-write': elif acl == 'public-read-write':
grants.append(FakeGrant([ALL_USERS_GRANTEE], [ grants.append(FakeGrant([ALL_USERS_GRANTEE], [
PERMISSION_READ, PERMISSION_WRITE])) PERMISSION_READ, PERMISSION_WRITE]))
elif acl == 'authenticated-read': elif acl == 'authenticated-read':
grants.append( grants.append(
FakeGrant([AUTHENTICATED_USERS_GRANTEE], [PERMISSION_READ])) FakeGrant([AUTHENTICATED_USERS_GRANTEE], [PERMISSION_READ]))
@ -282,7 +286,7 @@ def get_canned_acl(acl):
pass # TODO: bucket owner, EC2 Read pass # TODO: bucket owner, EC2 Read
elif acl == 'log-delivery-write': elif acl == 'log-delivery-write':
grants.append(FakeGrant([LOG_DELIVERY_GRANTEE], [ grants.append(FakeGrant([LOG_DELIVERY_GRANTEE], [
PERMISSION_READ_ACP, PERMISSION_WRITE])) PERMISSION_READ_ACP, PERMISSION_WRITE]))
else: else:
assert False, 'Unknown canned acl: %s' % (acl,) assert False, 'Unknown canned acl: %s' % (acl,)
return FakeAcl(grants=grants) return FakeAcl(grants=grants)
@ -307,18 +311,35 @@ class FakeTag(BaseModel):
self.value = value self.value = value
class LifecycleFilter(BaseModel):
def __init__(self, prefix=None, tag=None, and_filter=None):
self.prefix = prefix or ''
self.tag = tag
self.and_filter = and_filter
class LifecycleAndFilter(BaseModel):
def __init__(self, prefix=None, tags=None):
self.prefix = prefix or ''
self.tags = tags
class LifecycleRule(BaseModel): class LifecycleRule(BaseModel):
def __init__(self, id=None, prefix=None, status=None, expiration_days=None, def __init__(self, id=None, prefix=None, lc_filter=None, status=None, expiration_days=None,
expiration_date=None, transition_days=None, expiration_date=None, transition_days=None, expired_object_delete_marker=None,
transition_date=None, storage_class=None): transition_date=None, storage_class=None):
self.id = id self.id = id
self.prefix = prefix self.prefix = prefix
self.filter = lc_filter
self.status = status self.status = status
self.expiration_days = expiration_days self.expiration_days = expiration_days
self.expiration_date = expiration_date self.expiration_date = expiration_date
self.transition_days = transition_days self.transition_days = transition_days
self.transition_date = transition_date self.transition_date = transition_date
self.expired_object_delete_marker = expired_object_delete_marker
self.storage_class = storage_class self.storage_class = storage_class
@ -333,6 +354,26 @@ class CorsRule(BaseModel):
self.max_age_seconds = max_age_seconds self.max_age_seconds = max_age_seconds
class Notification(BaseModel):
def __init__(self, arn, events, filters=None, id=None):
self.id = id if id else ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(50))
self.arn = arn
self.events = events
self.filters = filters if filters else {}
class NotificationConfiguration(BaseModel):
def __init__(self, topic=None, queue=None, cloud_function=None):
self.topic = [Notification(t["Topic"], t["Event"], filters=t.get("Filter"), id=t.get("Id")) for t in topic] \
if topic else []
self.queue = [Notification(q["Queue"], q["Event"], filters=q.get("Filter"), id=q.get("Id")) for q in queue] \
if queue else []
self.cloud_function = [Notification(c["CloudFunction"], c["Event"], filters=c.get("Filter"), id=c.get("Id"))
for c in cloud_function] if cloud_function else []
class FakeBucket(BaseModel): class FakeBucket(BaseModel):
def __init__(self, name, region_name): def __init__(self, name, region_name):
@ -347,6 +388,8 @@ class FakeBucket(BaseModel):
self.acl = get_canned_acl('private') self.acl = get_canned_acl('private')
self.tags = FakeTagging() self.tags = FakeTagging()
self.cors = [] self.cors = []
self.logging = {}
self.notification_configuration = None
@property @property
def location(self): def location(self):
@ -361,12 +404,50 @@ class FakeBucket(BaseModel):
for rule in rules: for rule in rules:
expiration = rule.get('Expiration') expiration = rule.get('Expiration')
transition = rule.get('Transition') transition = rule.get('Transition')
eodm = None
if expiration and expiration.get("ExpiredObjectDeleteMarker") is not None:
# This cannot be set if Date or Days is set:
if expiration.get("Days") or expiration.get("Date"):
raise MalformedXML()
eodm = expiration["ExpiredObjectDeleteMarker"]
# Pull out the filter:
lc_filter = None
if rule.get("Filter"):
# Can't have both `Filter` and `Prefix` (need to check for the presence of the key):
try:
if rule["Prefix"] or not rule["Prefix"]:
raise MalformedXML()
except KeyError:
pass
and_filter = None
if rule["Filter"].get("And"):
and_tags = []
if rule["Filter"]["And"].get("Tag"):
if not isinstance(rule["Filter"]["And"]["Tag"], list):
rule["Filter"]["And"]["Tag"] = [rule["Filter"]["And"]["Tag"]]
for t in rule["Filter"]["And"]["Tag"]:
and_tags.append(FakeTag(t["Key"], t.get("Value", '')))
and_filter = LifecycleAndFilter(prefix=rule["Filter"]["And"].get("Prefix"), tags=and_tags)
filter_tag = None
if rule["Filter"].get("Tag"):
filter_tag = FakeTag(rule["Filter"]["Tag"]["Key"], rule["Filter"]["Tag"].get("Value", ''))
# Prefix is optional inside Filter (e.g. a Tag-only or And-only filter):
lc_filter = LifecycleFilter(prefix=rule["Filter"].get("Prefix"), tag=filter_tag, and_filter=and_filter)
self.rules.append(LifecycleRule( self.rules.append(LifecycleRule(
id=rule.get('ID'), id=rule.get('ID'),
prefix=rule.get('Prefix'), prefix=rule.get('Prefix'),
lc_filter=lc_filter,
status=rule['Status'], status=rule['Status'],
expiration_days=expiration.get('Days') if expiration else None, expiration_days=expiration.get('Days') if expiration else None,
expiration_date=expiration.get('Date') if expiration else None, expiration_date=expiration.get('Date') if expiration else None,
expired_object_delete_marker=eodm,
transition_days=transition.get('Days') if transition else None, transition_days=transition.get('Days') if transition else None,
transition_date=transition.get('Date') if transition else None, transition_date=transition.get('Date') if transition else None,
storage_class=transition[ storage_class=transition[
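To make the parsing above concrete, here is a sketch of a test driving it through boto3 (names are illustrative; note that `Filter` and a top-level `Prefix` are mutually exclusive, and `ExpiredObjectDeleteMarker` cannot be combined with `Days` or `Date`):

```python
import boto3
from moto import mock_s3


@mock_s3
def test_lifecycle_filters():
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.create_bucket(Bucket='mybucket')
    s3.put_bucket_lifecycle_configuration(
        Bucket='mybucket',
        LifecycleConfiguration={'Rules': [
            # Plain prefix filter plus an age-based expiration:
            {'ID': 'expire-logs',
             'Filter': {'Prefix': 'logs/'},
             'Status': 'Enabled',
             'Expiration': {'Days': 30}},
            # An And filter combining a prefix with tags, expiring only
            # orphaned delete markers:
            {'ID': 'clean-up-delete-markers',
             'Filter': {'And': {'Prefix': 'tmp/',
                                'Tags': [{'Key': 'scope', 'Value': 'temp'}]}},
             'Status': 'Enabled',
             'Expiration': {'ExpiredObjectDeleteMarker': True}}]})
    rules = s3.get_bucket_lifecycle_configuration(Bucket='mybucket')['Rules']
    assert rules[0]['Filter']['Prefix'] == 'logs/'
```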
@ -422,6 +503,59 @@ class FakeBucket(BaseModel):
def tagging(self): def tagging(self):
return self.tags return self.tags
def set_logging(self, logging_config, bucket_backend):
if not logging_config:
self.logging = {}
return
from moto.s3.exceptions import InvalidTargetBucketForLogging, CrossLocationLoggingProhibitted
# Target bucket must exist in the same account (assuming all moto buckets are in the same account):
if not bucket_backend.buckets.get(logging_config["TargetBucket"]):
raise InvalidTargetBucketForLogging("The target bucket for logging does not exist.")
# Does the target bucket have the log-delivery WRITE and READ_ACP permissions?
write = read_acp = False
for grant in bucket_backend.buckets[logging_config["TargetBucket"]].acl.grants:
# Must be granted to: http://acs.amazonaws.com/groups/s3/LogDelivery
for grantee in grant.grantees:
if grantee.uri == "http://acs.amazonaws.com/groups/s3/LogDelivery":
if "WRITE" in grant.permissions or "FULL_CONTROL" in grant.permissions:
write = True
if "READ_ACP" in grant.permissions or "FULL_CONTROL" in grant.permissions:
read_acp = True
break
if not write or not read_acp:
raise InvalidTargetBucketForLogging("You must give the log-delivery group WRITE and READ_ACP"
" permissions to the target bucket")
# Buckets must also exist within the same region:
if bucket_backend.buckets[logging_config["TargetBucket"]].region_name != self.region_name:
raise CrossLocationLoggingProhibitted()
# Checks pass -- set the logging config:
self.logging = logging_config
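A sketch of the happy path through this method, using the `log-delivery-write` canned ACL defined earlier to satisfy the grant checks (bucket names are made up; both buckets share a region, so the cross-location check passes):

```python
import boto3
from moto import mock_s3


@mock_s3
def test_bucket_logging_happy_path():
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.create_bucket(Bucket='website')
    s3.create_bucket(Bucket='logs')
    # Grants the log-delivery group WRITE and READ_ACP on the target bucket.
    s3.put_bucket_acl(Bucket='logs', ACL='log-delivery-write')
    s3.put_bucket_logging(
        Bucket='website',
        BucketLoggingStatus={'LoggingEnabled': {
            'TargetBucket': 'logs',
            'TargetPrefix': 'access/'}})
    status = s3.get_bucket_logging(Bucket='website')
    assert status['LoggingEnabled']['TargetBucket'] == 'logs'
```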
def set_notification_configuration(self, notification_config):
if not notification_config:
self.notification_configuration = None
return
self.notification_configuration = NotificationConfiguration(
topic=notification_config.get("TopicConfiguration"),
queue=notification_config.get("QueueConfiguration"),
cloud_function=notification_config.get("CloudFunctionConfiguration")
)
# Validate that the region is correct:
for thing in ["topic", "queue", "cloud_function"]:
for t in getattr(self.notification_configuration, thing):
region = t.arn.split(":")[3]
if region != self.region_name:
raise InvalidNotificationDestination()
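The region check above means the destination ARN must live in the bucket's region. A sketch of a passing call (the topic ARN is illustrative; it is only validated textually, as the responses code further down shows):

```python
import boto3
from moto import mock_s3


@mock_s3
def test_bucket_notification():
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.create_bucket(Bucket='mybucket')
    s3.put_bucket_notification_configuration(
        Bucket='mybucket',
        NotificationConfiguration={'TopicConfigurations': [{
            # us-east-1 matches the bucket's region; any other region here
            # would raise InvalidNotificationDestination.
            'TopicArn': 'arn:aws:sns:us-east-1:123456789012:my-topic',
            'Events': ['s3:ObjectCreated:*']}]})
    config = s3.get_bucket_notification_configuration(Bucket='mybucket')
    assert config['TopicConfigurations'][0]['Events'] == ['s3:ObjectCreated:*']
```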
def set_website_configuration(self, website_configuration): def set_website_configuration(self, website_configuration):
self.website_configuration = website_configuration self.website_configuration = website_configuration
@ -608,10 +742,18 @@ class S3Backend(BaseBackend):
bucket = self.get_bucket(bucket_name) bucket = self.get_bucket(bucket_name)
bucket.set_cors(cors_rules) bucket.set_cors(cors_rules)
def put_bucket_logging(self, bucket_name, logging_config):
bucket = self.get_bucket(bucket_name)
bucket.set_logging(logging_config, self)
def delete_bucket_cors(self, bucket_name): def delete_bucket_cors(self, bucket_name):
bucket = self.get_bucket(bucket_name) bucket = self.get_bucket(bucket_name)
bucket.delete_cors() bucket.delete_cors()
def put_bucket_notification_configuration(self, bucket_name, notification_config):
bucket = self.get_bucket(bucket_name)
bucket.set_notification_configuration(notification_config)
def initiate_multipart(self, bucket_name, key_name, metadata): def initiate_multipart(self, bucket_name, key_name, metadata):
bucket = self.get_bucket(bucket_name) bucket = self.get_bucket(bucket_name)
new_multipart = FakeMultipart(key_name, metadata) new_multipart = FakeMultipart(key_name, metadata)
@ -683,6 +825,7 @@ class S3Backend(BaseBackend):
else: else:
key_results.add(key) key_results.add(key)
key_results = filter(lambda key: not isinstance(key, FakeDeleteMarker), key_results)
key_results = sorted(key_results, key=lambda key: key.name) key_results = sorted(key_results, key=lambda key: key.name)
folder_results = [folder_name for folder_name in sorted( folder_results = [folder_name for folder_name in sorted(
folder_results, key=lambda key: key)] folder_results, key=lambda key: key)]

View File

@ -4,22 +4,24 @@ import re
import six import six
from moto.core.utils import str_to_rfc_1123_datetime from moto.core.utils import str_to_rfc_1123_datetime
from six.moves.urllib.parse import parse_qs, urlparse from six.moves.urllib.parse import parse_qs, urlparse, unquote
import xmltodict import xmltodict
from moto.packages.httpretty.core import HTTPrettyRequest from moto.packages.httpretty.core import HTTPrettyRequest
from moto.core.responses import _TemplateEnvironmentMixin from moto.core.responses import _TemplateEnvironmentMixin
from moto.s3bucket_path.utils import bucket_name_from_url as bucketpath_bucket_name_from_url, parse_key_name as bucketpath_parse_key_name, is_delete_keys as bucketpath_is_delete_keys
from .exceptions import BucketAlreadyExists, S3ClientError, MissingBucket, MissingKey, InvalidPartOrder
from .models import s3_backend, get_canned_acl, FakeGrantee, FakeGrant, FakeAcl, FakeKey, FakeTagging, FakeTagSet, FakeTag
from .utils import bucket_name_from_url, metadata_from_headers

from moto.s3bucket_path.utils import bucket_name_from_url as bucketpath_bucket_name_from_url, \
    parse_key_name as bucketpath_parse_key_name, is_delete_keys as bucketpath_is_delete_keys
from .exceptions import BucketAlreadyExists, S3ClientError, MissingBucket, MissingKey, InvalidPartOrder, MalformedXML, \
    MalformedACLError, InvalidNotificationARN, InvalidNotificationEvent
from .models import s3_backend, get_canned_acl, FakeGrantee, FakeGrant, FakeAcl, FakeKey, FakeTagging, FakeTagSet, \
    FakeTag
from .utils import bucket_name_from_url, metadata_from_headers, parse_region_from_url
from xml.dom import minidom from xml.dom import minidom
REGION_URL_REGEX = r'\.s3-(.+?)\.amazonaws\.com'
DEFAULT_REGION_NAME = 'us-east-1' DEFAULT_REGION_NAME = 'us-east-1'
@ -55,10 +57,11 @@ class ResponseObject(_TemplateEnvironmentMixin):
if not host: if not host:
host = urlparse(request.url).netloc host = urlparse(request.url).netloc
if (not host or host.startswith('localhost') or
        re.match(r'^[^.]+$', host) or re.match(r'^.*\.svc\.cluster\.local$', host)):
    # Default to path-based buckets for (1) localhost, (2) local host names that do not
    # contain a "." (e.g., Docker container host names), or (3) kubernetes host names

if (not host or host.startswith('localhost') or host.startswith('localstack') or
        re.match(r'^[^.]+$', host) or re.match(r'^.*\.svc\.cluster\.local$', host)):
    # Default to path-based buckets for (1) localhost, (2) localstack hosts (e.g. localstack.dev),
    # (3) local host names that do not contain a "." (e.g., Docker container host names), or
    # (4) kubernetes host names
return False return False
match = re.match(r'^([^\[\]:]+)(:\d+)?$', host) match = re.match(r'^([^\[\]:]+)(:\d+)?$', host)
@ -70,8 +73,9 @@ class ResponseObject(_TemplateEnvironmentMixin):
match = re.match(r'^\[(.+)\](:\d+)?$', host) match = re.match(r'^\[(.+)\](:\d+)?$', host)
if match: if match:
match = re.match(r'^(((?=.*(::))(?!.*\3.+\3))\3?|[\dA-F]{1,4}:)([\dA-F]{1,4}(\3|:\b)|\2){5}(([\dA-F]{1,4}(\3|:\b|$)|\2){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})\Z',
                 match.groups()[0], re.IGNORECASE)

match = re.match(
    r'^(((?=.*(::))(?!.*\3.+\3))\3?|[\dA-F]{1,4}:)([\dA-F]{1,4}(\3|:\b)|\2){5}(([\dA-F]{1,4}(\3|:\b|$)|\2){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})\Z',
    match.groups()[0], re.IGNORECASE)
if match: if match:
return False return False
@ -125,10 +129,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
parsed_url = urlparse(full_url) parsed_url = urlparse(full_url)
querystring = parse_qs(parsed_url.query, keep_blank_values=True) querystring = parse_qs(parsed_url.query, keep_blank_values=True)
method = request.method method = request.method
region_name = DEFAULT_REGION_NAME region_name = parse_region_from_url(full_url)
region_match = re.search(REGION_URL_REGEX, full_url)
if region_match:
region_name = region_match.groups()[0]
bucket_name = self.parse_bucket_name_from_url(request, full_url) bucket_name = self.parse_bucket_name_from_url(request, full_url)
if not bucket_name: if not bucket_name:
@ -169,7 +170,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
# HEAD (which the real API responds with), and instead # HEAD (which the real API responds with), and instead
# raises NoSuchBucket, leading to inconsistency in # raises NoSuchBucket, leading to inconsistency in
# error response between real and mocked responses. # error response between real and mocked responses.
return 404, {}, "Not Found" return 404, {}, ""
return 200, {}, "" return 200, {}, ""
def _bucket_response_get(self, bucket_name, querystring, headers): def _bucket_response_get(self, bucket_name, querystring, headers):
@ -229,6 +230,13 @@ class ResponseObject(_TemplateEnvironmentMixin):
return 404, {}, template.render(bucket_name=bucket_name) return 404, {}, template.render(bucket_name=bucket_name)
template = self.response_template(S3_BUCKET_TAGGING_RESPONSE) template = self.response_template(S3_BUCKET_TAGGING_RESPONSE)
return template.render(bucket=bucket) return template.render(bucket=bucket)
elif 'logging' in querystring:
bucket = self.backend.get_bucket(bucket_name)
if not bucket.logging:
template = self.response_template(S3_NO_LOGGING_CONFIG)
return 200, {}, template.render()
template = self.response_template(S3_LOGGING_CONFIG)
return 200, {}, template.render(logging=bucket.logging)
elif "cors" in querystring: elif "cors" in querystring:
bucket = self.backend.get_bucket(bucket_name) bucket = self.backend.get_bucket(bucket_name)
if len(bucket.cors) == 0: if len(bucket.cors) == 0:
@ -236,6 +244,13 @@ class ResponseObject(_TemplateEnvironmentMixin):
return 404, {}, template.render(bucket_name=bucket_name) return 404, {}, template.render(bucket_name=bucket_name)
template = self.response_template(S3_BUCKET_CORS_RESPONSE) template = self.response_template(S3_BUCKET_CORS_RESPONSE)
return template.render(bucket=bucket) return template.render(bucket=bucket)
elif "notification" in querystring:
bucket = self.backend.get_bucket(bucket_name)
if not bucket.notification_configuration:
return 200, {}, ""
template = self.response_template(S3_GET_BUCKET_NOTIFICATION_CONFIG)
return template.render(bucket=bucket)
elif 'versions' in querystring: elif 'versions' in querystring:
delimiter = querystring.get('delimiter', [None])[0] delimiter = querystring.get('delimiter', [None])[0]
encoding_type = querystring.get('encoding-type', [None])[0] encoding_type = querystring.get('encoding-type', [None])[0]
@ -324,8 +339,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
limit = continuation_token or start_after limit = continuation_token or start_after
result_keys = self._get_results_from_token(result_keys, limit) result_keys = self._get_results_from_token(result_keys, limit)
result_keys, is_truncated, \
    next_continuation_token = self._truncate_result(result_keys, max_keys)

result_keys, is_truncated, next_continuation_token = self._truncate_result(result_keys, max_keys)
return template.render( return template.render(
bucket=bucket, bucket=bucket,
@ -380,8 +394,11 @@ class ResponseObject(_TemplateEnvironmentMixin):
self.backend.set_bucket_policy(bucket_name, body) self.backend.set_bucket_policy(bucket_name, body)
return 'True' return 'True'
elif 'acl' in querystring: elif 'acl' in querystring:
# TODO: Support the XML-based ACL format
self.backend.set_bucket_acl(bucket_name, self._acl_from_headers(request.headers))

# Headers are first. If not set, then look at the body (consistent with the documentation):
acls = self._acl_from_headers(request.headers)
if not acls:
    acls = self._acl_from_xml(body)
self.backend.set_bucket_acl(bucket_name, acls)
return "" return ""
elif "tagging" in querystring: elif "tagging" in querystring:
tagging = self._bucket_tagging_from_xml(body) tagging = self._bucket_tagging_from_xml(body)
@ -391,12 +408,27 @@ class ResponseObject(_TemplateEnvironmentMixin):
self.backend.set_bucket_website_configuration(bucket_name, body) self.backend.set_bucket_website_configuration(bucket_name, body)
return "" return ""
elif "cors" in querystring: elif "cors" in querystring:
from moto.s3.exceptions import MalformedXML
try: try:
self.backend.put_bucket_cors(bucket_name, self._cors_from_xml(body)) self.backend.put_bucket_cors(bucket_name, self._cors_from_xml(body))
return "" return ""
except KeyError: except KeyError:
raise MalformedXML() raise MalformedXML()
elif "logging" in querystring:
try:
self.backend.put_bucket_logging(bucket_name, self._logging_from_xml(body))
return ""
except KeyError:
raise MalformedXML()
elif "notification" in querystring:
try:
self.backend.put_bucket_notification_configuration(bucket_name,
self._notification_config_from_xml(body))
return ""
except KeyError:
raise MalformedXML()
else: else:
if body: if body:
try: try:
@ -515,6 +547,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
def toint(i): def toint(i):
return int(i) if i else None return int(i) if i else None
begin, end = map(toint, rspec.split('-')) begin, end = map(toint, rspec.split('-'))
if begin is not None: # byte range if begin is not None: # byte range
end = last if end is None else min(end, last) end = last if end is None else min(end, last)
@ -631,7 +664,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
upload_id = query['uploadId'][0] upload_id = query['uploadId'][0]
part_number = int(query['partNumber'][0]) part_number = int(query['partNumber'][0])
if 'x-amz-copy-source' in request.headers: if 'x-amz-copy-source' in request.headers:
src = request.headers.get("x-amz-copy-source").lstrip("/") src = unquote(request.headers.get("x-amz-copy-source")).lstrip("/")
src_bucket, src_key = src.split("/", 1) src_bucket, src_key = src.split("/", 1)
src_range = request.headers.get( src_range = request.headers.get(
'x-amz-copy-source-range', '').split("bytes=")[-1] 'x-amz-copy-source-range', '').split("bytes=")[-1]
@ -673,7 +706,7 @@ class ResponseObject(_TemplateEnvironmentMixin):
if 'x-amz-copy-source' in request.headers: if 'x-amz-copy-source' in request.headers:
# Copy key # Copy key
src_key_parsed = urlparse(request.headers.get("x-amz-copy-source")) src_key_parsed = urlparse(unquote(request.headers.get("x-amz-copy-source")))
src_bucket, src_key = src_key_parsed.path.lstrip("/").split("/", 1) src_bucket, src_key = src_key_parsed.path.lstrip("/").split("/", 1)
src_version_id = parse_qs(src_key_parsed.query).get( src_version_id = parse_qs(src_key_parsed.query).get(
'versionId', [None])[0] 'versionId', [None])[0]
@ -731,6 +764,58 @@ class ResponseObject(_TemplateEnvironmentMixin):
else: else:
return 404, response_headers, "" return 404, response_headers, ""
def _acl_from_xml(self, xml):
parsed_xml = xmltodict.parse(xml)
if not parsed_xml.get("AccessControlPolicy"):
raise MalformedACLError()
# The owner is needed for some reason...
if not parsed_xml["AccessControlPolicy"].get("Owner"):
# TODO: Validate that the Owner is actually correct.
raise MalformedACLError()
# If empty, then no ACLs:
if parsed_xml["AccessControlPolicy"].get("AccessControlList") is None:
return []
if not parsed_xml["AccessControlPolicy"]["AccessControlList"].get("Grant"):
raise MalformedACLError()
permissions = [
"READ",
"WRITE",
"READ_ACP",
"WRITE_ACP",
"FULL_CONTROL"
]
if not isinstance(parsed_xml["AccessControlPolicy"]["AccessControlList"]["Grant"], list):
parsed_xml["AccessControlPolicy"]["AccessControlList"]["Grant"] = \
[parsed_xml["AccessControlPolicy"]["AccessControlList"]["Grant"]]
grants = self._get_grants_from_xml(parsed_xml["AccessControlPolicy"]["AccessControlList"]["Grant"],
MalformedACLError, permissions)
return FakeAcl(grants)
def _get_grants_from_xml(self, grant_list, exception_type, permissions):
grants = []
for grant in grant_list:
if grant.get("Permission", "") not in permissions:
raise exception_type()
if grant["Grantee"].get("@xsi:type", "") not in ["CanonicalUser", "AmazonCustomerByEmail", "Group"]:
raise exception_type()
# TODO: Verify that the proper grantee data is supplied based on the type.
grants.append(FakeGrant(
[FakeGrantee(id=grant["Grantee"].get("ID", ""), display_name=grant["Grantee"].get("DisplayName", ""),
uri=grant["Grantee"].get("URI", ""))],
[grant["Permission"]])
)
return grants
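For reference, the request shape `_acl_from_xml` above accepts, sent through boto3 (which serializes it into the `AccessControlPolicy` XML with an `xsi:type` on each grantee; the owner ID is a placeholder):

```python
import boto3
from moto import mock_s3


@mock_s3
def test_put_bucket_acl_from_xml_body():
    s3 = boto3.client('s3', region_name='us-east-1')
    s3.create_bucket(Bucket='mybucket')
    # No x-amz-acl header is sent here, so the XML body path is taken.
    s3.put_bucket_acl(
        Bucket='mybucket',
        AccessControlPolicy={
            'Owner': {'ID': 'abc123'},
            'Grants': [{
                'Grantee': {
                    'Type': 'Group',
                    'URI': 'http://acs.amazonaws.com/groups/global/AllUsers'},
                'Permission': 'READ'}]})
    grants = s3.get_bucket_acl(Bucket='mybucket')['Grants']
    assert grants[0]['Permission'] == 'READ'
```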
def _acl_from_headers(self, headers): def _acl_from_headers(self, headers):
canned_acl = headers.get('x-amz-acl', '') canned_acl = headers.get('x-amz-acl', '')
if canned_acl: if canned_acl:
@ -814,6 +899,110 @@ class ResponseObject(_TemplateEnvironmentMixin):
return [parsed_xml["CORSConfiguration"]["CORSRule"]] return [parsed_xml["CORSConfiguration"]["CORSRule"]]
def _logging_from_xml(self, xml):
parsed_xml = xmltodict.parse(xml)
if not parsed_xml["BucketLoggingStatus"].get("LoggingEnabled"):
return {}
if not parsed_xml["BucketLoggingStatus"]["LoggingEnabled"].get("TargetBucket"):
raise MalformedXML()
if not parsed_xml["BucketLoggingStatus"]["LoggingEnabled"].get("TargetPrefix"):
parsed_xml["BucketLoggingStatus"]["LoggingEnabled"]["TargetPrefix"] = ""
# Get the ACLs:
if parsed_xml["BucketLoggingStatus"]["LoggingEnabled"].get("TargetGrants"):
permissions = [
"READ",
"WRITE",
"FULL_CONTROL"
]
if not isinstance(parsed_xml["BucketLoggingStatus"]["LoggingEnabled"]["TargetGrants"]["Grant"], list):
target_grants = self._get_grants_from_xml(
[parsed_xml["BucketLoggingStatus"]["LoggingEnabled"]["TargetGrants"]["Grant"]],
MalformedXML,
permissions
)
else:
target_grants = self._get_grants_from_xml(
parsed_xml["BucketLoggingStatus"]["LoggingEnabled"]["TargetGrants"]["Grant"],
MalformedXML,
permissions
)
parsed_xml["BucketLoggingStatus"]["LoggingEnabled"]["TargetGrants"] = target_grants
return parsed_xml["BucketLoggingStatus"]["LoggingEnabled"]
def _notification_config_from_xml(self, xml):
parsed_xml = xmltodict.parse(xml)
if not len(parsed_xml["NotificationConfiguration"]):
return {}
# The types of notifications, and their required fields (apparently lambda is categorized by the API as
# "CloudFunction"):
notification_fields = [
("Topic", "sns"),
("Queue", "sqs"),
("CloudFunction", "lambda")
]
event_names = [
's3:ReducedRedundancyLostObject',
's3:ObjectCreated:*',
's3:ObjectCreated:Put',
's3:ObjectCreated:Post',
's3:ObjectCreated:Copy',
's3:ObjectCreated:CompleteMultipartUpload',
's3:ObjectRemoved:*',
's3:ObjectRemoved:Delete',
's3:ObjectRemoved:DeleteMarkerCreated'
]
found_notifications = 0 # Tripwire -- if this is not ever set, then there were no notifications
for name, arn_string in notification_fields:
# 1st verify that the proper notification configuration has been passed in (with an ARN that is close
# to being correct -- nothing too complex in the ARN logic):
the_notification = parsed_xml["NotificationConfiguration"].get("{}Configuration".format(name))
if the_notification:
found_notifications += 1
if not isinstance(the_notification, list):
the_notification = parsed_xml["NotificationConfiguration"]["{}Configuration".format(name)] \
= [the_notification]
for n in the_notification:
if not n[name].startswith("arn:aws:{}:".format(arn_string)):
raise InvalidNotificationARN()
# 2nd, verify that the Events list is correct:
assert n["Event"]
if not isinstance(n["Event"], list):
n["Event"] = [n["Event"]]
for event in n["Event"]:
if event not in event_names:
raise InvalidNotificationEvent()
# Parse out the filters:
if n.get("Filter"):
# Error if S3Key is blank:
if not n["Filter"]["S3Key"]:
raise KeyError()
if not isinstance(n["Filter"]["S3Key"]["FilterRule"], list):
n["Filter"]["S3Key"]["FilterRule"] = [n["Filter"]["S3Key"]["FilterRule"]]
for filter_rule in n["Filter"]["S3Key"]["FilterRule"]:
assert filter_rule["Name"] in ["suffix", "prefix"]
assert filter_rule["Value"]
if not found_notifications:
return {}
return parsed_xml["NotificationConfiguration"]
def _key_response_delete(self, bucket_name, query, key_name, headers): def _key_response_delete(self, bucket_name, query, key_name, headers):
if query.get('uploadId'): if query.get('uploadId'):
upload_id = query['uploadId'][0] upload_id = query['uploadId'][0]
@ -987,7 +1176,30 @@ S3_BUCKET_LIFECYCLE_CONFIGURATION = """<?xml version="1.0" encoding="UTF-8"?>
{% for rule in rules %} {% for rule in rules %}
<Rule> <Rule>
<ID>{{ rule.id }}</ID> <ID>{{ rule.id }}</ID>
{% if rule.filter %}
<Filter>
<Prefix>{{ rule.filter.prefix }}</Prefix>
{% if rule.filter.tag %}
<Tag>
<Key>{{ rule.filter.tag.key }}</Key>
<Value>{{ rule.filter.tag.value }}</Value>
</Tag>
{% endif %}
{% if rule.filter.and_filter %}
<And>
<Prefix>{{ rule.filter.and_filter.prefix }}</Prefix>
{% for tag in rule.filter.and_filter.tags %}
<Tag>
<Key>{{ tag.key }}</Key>
<Value>{{ tag.value }}</Value>
</Tag>
{% endfor %}
</And>
{% endif %}
</Filter>
{% else %}
<Prefix>{{ rule.prefix if rule.prefix != None }}</Prefix> <Prefix>{{ rule.prefix if rule.prefix != None }}</Prefix>
{% endif %}
<Status>{{ rule.status }}</Status> <Status>{{ rule.status }}</Status>
{% if rule.storage_class %} {% if rule.storage_class %}
<Transition> <Transition>
@ -1000,7 +1212,7 @@ S3_BUCKET_LIFECYCLE_CONFIGURATION = """<?xml version="1.0" encoding="UTF-8"?>
<StorageClass>{{ rule.storage_class }}</StorageClass> <StorageClass>{{ rule.storage_class }}</StorageClass>
</Transition> </Transition>
{% endif %} {% endif %}
{% if rule.expiration_days or rule.expiration_date %} {% if rule.expiration_days or rule.expiration_date or rule.expired_object_delete_marker %}
<Expiration> <Expiration>
{% if rule.expiration_days %} {% if rule.expiration_days %}
<Days>{{ rule.expiration_days }}</Days> <Days>{{ rule.expiration_days }}</Days>
@ -1008,6 +1220,9 @@ S3_BUCKET_LIFECYCLE_CONFIGURATION = """<?xml version="1.0" encoding="UTF-8"?>
{% if rule.expiration_date %} {% if rule.expiration_date %}
<Date>{{ rule.expiration_date }}</Date> <Date>{{ rule.expiration_date }}</Date>
{% endif %} {% endif %}
{% if rule.expired_object_delete_marker %}
<ExpiredObjectDeleteMarker>{{ rule.expired_object_delete_marker }}</ExpiredObjectDeleteMarker>
{% endif %}
</Expiration> </Expiration>
{% endif %} {% endif %}
</Rule> </Rule>
@ -1322,3 +1537,105 @@ S3_NO_CORS_CONFIG = """<?xml version="1.0" encoding="UTF-8"?>
<HostId>9Gjjt1m+cjU4OPvX9O9/8RuvnG41MRb/18Oux2o5H5MY7ISNTlXN+Dz9IG62/ILVxhAGI0qyPfg=</HostId> <HostId>9Gjjt1m+cjU4OPvX9O9/8RuvnG41MRb/18Oux2o5H5MY7ISNTlXN+Dz9IG62/ILVxhAGI0qyPfg=</HostId>
</Error> </Error>
""" """
S3_LOGGING_CONFIG = """<?xml version="1.0" encoding="UTF-8"?>
<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01">
<LoggingEnabled>
<TargetBucket>{{ logging["TargetBucket"] }}</TargetBucket>
<TargetPrefix>{{ logging["TargetPrefix"] }}</TargetPrefix>
{% if logging.get("TargetGrants") %}
<TargetGrants>
{% for grant in logging["TargetGrants"] %}
<Grant>
<Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:type="{{ grant.grantees[0].type }}">
{% if grant.grantees[0].uri %}
<URI>{{ grant.grantees[0].uri }}</URI>
{% endif %}
{% if grant.grantees[0].id %}
<ID>{{ grant.grantees[0].id }}</ID>
{% endif %}
{% if grant.grantees[0].display_name %}
<DisplayName>{{ grant.grantees[0].display_name }}</DisplayName>
{% endif %}
</Grantee>
<Permission>{{ grant.permissions[0] }}</Permission>
</Grant>
{% endfor %}
</TargetGrants>
{% endif %}
</LoggingEnabled>
</BucketLoggingStatus>
"""
S3_NO_LOGGING_CONFIG = """<?xml version="1.0" encoding="UTF-8"?>
<BucketLoggingStatus xmlns="http://doc.s3.amazonaws.com/2006-03-01" />
"""
S3_GET_BUCKET_NOTIFICATION_CONFIG = """<?xml version="1.0" encoding="UTF-8"?>
<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
{% for topic in bucket.notification_configuration.topic %}
<TopicConfiguration>
<Id>{{ topic.id }}</Id>
<Topic>{{ topic.arn }}</Topic>
{% for event in topic.events %}
<Event>{{ event }}</Event>
{% endfor %}
{% if topic.filters %}
<Filter>
<S3Key>
{% for rule in topic.filters["S3Key"]["FilterRule"] %}
<FilterRule>
<Name>{{ rule["Name"] }}</Name>
<Value>{{ rule["Value"] }}</Value>
</FilterRule>
{% endfor %}
</S3Key>
</Filter>
{% endif %}
</TopicConfiguration>
{% endfor %}
{% for queue in bucket.notification_configuration.queue %}
<QueueConfiguration>
<Id>{{ queue.id }}</Id>
<Queue>{{ queue.arn }}</Queue>
{% for event in queue.events %}
<Event>{{ event }}</Event>
{% endfor %}
{% if queue.filters %}
<Filter>
<S3Key>
{% for rule in queue.filters["S3Key"]["FilterRule"] %}
<FilterRule>
<Name>{{ rule["Name"] }}</Name>
<Value>{{ rule["Value"] }}</Value>
</FilterRule>
{% endfor %}
</S3Key>
</Filter>
{% endif %}
</QueueConfiguration>
{% endfor %}
{% for cf in bucket.notification_configuration.cloud_function %}
<CloudFunctionConfiguration>
<Id>{{ cf.id }}</Id>
<CloudFunction>{{ cf.arn }}</CloudFunction>
{% for event in cf.events %}
<Event>{{ event }}</Event>
{% endfor %}
{% if cf.filters %}
<Filter>
<S3Key>
{% for rule in cf.filters["S3Key"]["FilterRule"] %}
<FilterRule>
<Name>{{ rule["Name"] }}</Name>
<Value>{{ rule["Value"] }}</Value>
</FilterRule>
{% endfor %}
</S3Key>
</Filter>
{% endif %}
</CloudFunctionConfiguration>
{% endfor %}
</NotificationConfiguration>
"""

View File

@ -1,4 +1,6 @@
from __future__ import unicode_literals from __future__ import unicode_literals
import logging
import os
from boto.s3.key import Key from boto.s3.key import Key
import re import re
@ -6,10 +8,16 @@ import six
from six.moves.urllib.parse import urlparse, unquote from six.moves.urllib.parse import urlparse, unquote
import sys import sys
log = logging.getLogger(__name__)
bucket_name_regex = re.compile("(.+).s3(.*).amazonaws.com") bucket_name_regex = re.compile("(.+).s3(.*).amazonaws.com")
def bucket_name_from_url(url): def bucket_name_from_url(url):
if os.environ.get('S3_IGNORE_SUBDOMAIN_BUCKETNAME', '') in ['1', 'true']:
return None
domain = urlparse(url).netloc domain = urlparse(url).netloc
if domain.startswith('www.'): if domain.startswith('www.'):
@ -27,6 +35,20 @@ def bucket_name_from_url(url):
return None return None
REGION_URL_REGEX = re.compile(
r'^https?://(s3[-\.](?P<region1>.+)\.amazonaws\.com/(.+)|'
r'(.+)\.s3-(?P<region2>.+)\.amazonaws\.com)/?')
def parse_region_from_url(url):
match = REGION_URL_REGEX.search(url)
if match:
region = match.group('region1') or match.group('region2')
else:
region = 'us-east-1'
return region
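A few worked examples of what the regex extracts (hostnames invented for illustration):

```python
from moto.s3.utils import parse_region_from_url

# Path-style URL with the region in the host:
assert parse_region_from_url(
    'https://s3-eu-west-1.amazonaws.com/mybucket/mykey') == 'eu-west-1'
# Virtual-hosted-style URL:
assert parse_region_from_url(
    'https://mybucket.s3-ap-southeast-2.amazonaws.com/mykey') == 'ap-southeast-2'
# No region in the URL falls back to the default:
assert parse_region_from_url(
    'https://s3.amazonaws.com/mybucket/mykey') == 'us-east-1'
```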
def metadata_from_headers(headers): def metadata_from_headers(headers):
metadata = {} metadata = {}
meta_regex = re.compile( meta_regex = re.compile(

View File

@ -4,11 +4,12 @@ import datetime
import uuid import uuid
import json import json
import boto.sns
import requests import requests
import six import six
import re import re
from boto3 import Session
from moto.compat import OrderedDict from moto.compat import OrderedDict
from moto.core import BaseBackend, BaseModel from moto.core import BaseBackend, BaseModel
from moto.core.utils import iso_8601_datetime_with_milliseconds from moto.core.utils import iso_8601_datetime_with_milliseconds
@ -42,11 +43,12 @@ class Topic(BaseModel):
self.subscriptions_confimed = 0 self.subscriptions_confimed = 0
self.subscriptions_deleted = 0 self.subscriptions_deleted = 0
def publish(self, message, subject=None): def publish(self, message, subject=None, message_attributes=None):
message_id = six.text_type(uuid.uuid4()) message_id = six.text_type(uuid.uuid4())
subscriptions, _ = self.sns_backend.list_subscriptions(self.arn) subscriptions, _ = self.sns_backend.list_subscriptions(self.arn)
for subscription in subscriptions: for subscription in subscriptions:
subscription.publish(message, message_id, subject=subject) subscription.publish(message, message_id, subject=subject,
message_attributes=message_attributes)
return message_id return message_id
def get_cfn_attribute(self, attribute_name): def get_cfn_attribute(self, attribute_name):
@ -81,25 +83,65 @@ class Subscription(BaseModel):
self.protocol = protocol self.protocol = protocol
self.arn = make_arn_for_subscription(self.topic.arn) self.arn = make_arn_for_subscription(self.topic.arn)
self.attributes = {} self.attributes = {}
self._filter_policy = None # filter policy as a dict, not json.
self.confirmed = False self.confirmed = False
def publish(self, message, message_id, subject=None): def publish(self, message, message_id, subject=None,
message_attributes=None):
if not self._matches_filter_policy(message_attributes):
return
if self.protocol == 'sqs': if self.protocol == 'sqs':
queue_name = self.endpoint.split(":")[-1] queue_name = self.endpoint.split(":")[-1]
region = self.endpoint.split(":")[3] region = self.endpoint.split(":")[3]
enveloped_message = json.dumps(self.get_post_data(message, message_id, subject), sort_keys=True, indent=2, separators=(',', ': ')) enveloped_message = json.dumps(self.get_post_data(message, message_id, subject, message_attributes=message_attributes), sort_keys=True, indent=2, separators=(',', ': '))
sqs_backends[region].send_message(queue_name, enveloped_message) sqs_backends[region].send_message(queue_name, enveloped_message)
elif self.protocol in ['http', 'https']: elif self.protocol in ['http', 'https']:
post_data = self.get_post_data(message, message_id, subject) post_data = self.get_post_data(message, message_id, subject)
requests.post(self.endpoint, json=post_data) requests.post(self.endpoint, json=post_data)
elif self.protocol == 'lambda': elif self.protocol == 'lambda':
# TODO: support bad function name # TODO: support bad function name
            function_name = self.endpoint.split(":")[-1]
            region = self.arn.split(':')[3]
            lambda_backends[region].send_message(function_name, message, subject=subject)

    def get_post_data(self, message, message_id, subject):
        return {

            # http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
            arr = self.endpoint.split(":")
            region = arr[3]
            qualifier = None
            if len(arr) == 7:
                assert arr[5] == 'function'
                function_name = arr[-1]
            elif len(arr) == 8:
                assert arr[5] == 'function'
                qualifier = arr[-1]
                function_name = arr[-2]
            else:
                assert False

            lambda_backends[region].send_message(function_name, message, subject=subject, qualifier=qualifier)

    def _matches_filter_policy(self, message_attributes):
        # TODO: support Anything-but matching, prefix matching and
        # numeric value matching.
        if not self._filter_policy:
            return True
        if message_attributes is None:
            message_attributes = {}

        def _field_match(field, rules, message_attributes):
            if field not in message_attributes:
                return False
            for rule in rules:
                if isinstance(rule, six.string_types):
                    # only string value matching is supported
                    if message_attributes[field]['Value'] == rule:
                        return True
            return False

        return all(_field_match(field, rules, message_attributes)
                   for field, rules in six.iteritems(self._filter_policy))

    def get_post_data(
            self, message, message_id, subject, message_attributes=None):
        post_data = {
"Type": "Notification", "Type": "Notification",
"MessageId": message_id, "MessageId": message_id,
"TopicArn": self.topic.arn, "TopicArn": self.topic.arn,
@ -111,6 +153,9 @@ class Subscription(BaseModel):
"SigningCertURL": "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-f3ecfb7224c7233fe7bb5f59f96de52f.pem", "SigningCertURL": "https://sns.us-east-1.amazonaws.com/SimpleNotificationService-f3ecfb7224c7233fe7bb5f59f96de52f.pem",
"UnsubscribeURL": "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:123456789012:some-topic:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55" "UnsubscribeURL": "https://sns.us-east-1.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=arn:aws:sns:us-east-1:123456789012:some-topic:2bcfbf39-05c3-41de-beaa-fcfcc21c8f55"
} }
if message_attributes:
post_data["MessageAttributes"] = message_attributes
return post_data
class PlatformApplication(BaseModel): class PlatformApplication(BaseModel):
@ -247,11 +292,21 @@ class SNSBackend(BaseBackend):
setattr(topic, attribute_name, attribute_value) setattr(topic, attribute_name, attribute_value)
def subscribe(self, topic_arn, endpoint, protocol): def subscribe(self, topic_arn, endpoint, protocol):
# AWS doesn't create duplicates
old_subscription = self._find_subscription(topic_arn, endpoint, protocol)
if old_subscription:
return old_subscription
topic = self.get_topic(topic_arn) topic = self.get_topic(topic_arn)
subscription = Subscription(topic, endpoint, protocol) subscription = Subscription(topic, endpoint, protocol)
self.subscriptions[subscription.arn] = subscription self.subscriptions[subscription.arn] = subscription
return subscription return subscription
def _find_subscription(self, topic_arn, endpoint, protocol):
for subscription in self.subscriptions.values():
if subscription.topic.arn == topic_arn and subscription.endpoint == endpoint and subscription.protocol == protocol:
return subscription
return None
def unsubscribe(self, subscription_arn): def unsubscribe(self, subscription_arn):
self.subscriptions.pop(subscription_arn) self.subscriptions.pop(subscription_arn)
@ -264,13 +319,15 @@ class SNSBackend(BaseBackend):
else: else:
return self._get_values_nexttoken(self.subscriptions, next_token) return self._get_values_nexttoken(self.subscriptions, next_token)
def publish(self, arn, message, subject=None): def publish(self, arn, message, subject=None, message_attributes=None):
if subject is not None and len(subject) >= 100: if subject is not None and len(subject) > 100:
# Note that the AWS docs around length are wrong: https://github.com/spulec/moto/issues/1503
raise ValueError('Subject must be less than 100 characters') raise ValueError('Subject must be 100 characters or fewer')
try: try:
topic = self.get_topic(arn) topic = self.get_topic(arn)
message_id = topic.publish(message, subject=subject) message_id = topic.publish(message, subject=subject,
message_attributes=message_attributes)
except SNSNotFoundError: except SNSNotFoundError:
endpoint = self.get_endpoint(arn) endpoint = self.get_endpoint(arn)
message_id = endpoint.publish(message) message_id = endpoint.publish(message)
@ -342,7 +399,7 @@ class SNSBackend(BaseBackend):
return subscription.attributes return subscription.attributes
def set_subscription_attributes(self, arn, name, value): def set_subscription_attributes(self, arn, name, value):
if name not in ['RawMessageDelivery', 'DeliveryPolicy']: if name not in ['RawMessageDelivery', 'DeliveryPolicy', 'FilterPolicy']:
raise SNSInvalidParameter('AttributeName') raise SNSInvalidParameter('AttributeName')
# TODO: should do validation # TODO: should do validation
@ -353,10 +410,13 @@ class SNSBackend(BaseBackend):
subscription.attributes[name] = value subscription.attributes[name] = value
if name == 'FilterPolicy':
subscription._filter_policy = json.loads(value)
sns_backends = {} sns_backends = {}
for region in boto.sns.regions(): for region in Session().get_available_regions('sns'):
sns_backends[region.name] = SNSBackend(region.name) sns_backends[region] = SNSBackend(region)
DEFAULT_TOPIC_POLICY = { DEFAULT_TOPIC_POLICY = {

View File

@ -6,7 +6,7 @@ from collections import defaultdict
from moto.core.responses import BaseResponse from moto.core.responses import BaseResponse
from moto.core.utils import camelcase_to_underscores from moto.core.utils import camelcase_to_underscores
from .models import sns_backends from .models import sns_backends
from .exceptions import SNSNotFoundError from .exceptions import SNSNotFoundError, InvalidParameterValue
from .utils import is_e164 from .utils import is_e164
@ -30,6 +30,49 @@ class SNSResponse(BaseResponse):
in attributes in attributes
) )
def _parse_message_attributes(self, prefix='', value_namespace='Value.'):
message_attributes = self._get_object_map(
'MessageAttributes.entry',
name='Name',
value='Value'
)
# SNS converts some key names before forwarding messages
# DataType -> Type, StringValue -> Value, BinaryValue -> Value
transformed_message_attributes = {}
for name, value in message_attributes.items():
# validation
data_type = value['DataType']
if not data_type:
raise InvalidParameterValue(
"The message attribute '{0}' must contain non-empty "
"message attribute value.".format(name))
data_type_parts = data_type.split('.')
if (len(data_type_parts) > 2 or
data_type_parts[0] not in ['String', 'Binary', 'Number']):
raise InvalidParameterValue(
"The message attribute '{0}' has an invalid message "
"attribute type, the set of supported type prefixes is "
"Binary, Number, and String.".format(name))
transform_value = None
if 'StringValue' in value:
transform_value = value['StringValue']
elif 'BinaryValue' in value:
transform_value = value['BinaryValue']
if not transform_value:
raise InvalidParameterValue(
"The message attribute '{0}' must contain non-empty "
"message attribute value for message attribute "
"type '{1}'.".format(name, data_type[0]))
# transformation
transformed_message_attributes[name] = {
'Type': data_type, 'Value': transform_value
}
return transformed_message_attributes
def create_topic(self): def create_topic(self):
name = self._get_param('Name') name = self._get_param('Name')
topic = self.backend.create_topic(name) topic = self.backend.create_topic(name)
@ -241,6 +284,8 @@ class SNSResponse(BaseResponse):
phone_number = self._get_param('PhoneNumber') phone_number = self._get_param('PhoneNumber')
subject = self._get_param('Subject') subject = self._get_param('Subject')
message_attributes = self._parse_message_attributes()
if phone_number is not None: if phone_number is not None:
# Check phone is correct syntax (e164) # Check phone is correct syntax (e164)
if not is_e164(phone_number): if not is_e164(phone_number):
@ -265,7 +310,9 @@ class SNSResponse(BaseResponse):
message = self._get_param('Message') message = self._get_param('Message')
try: try:
message_id = self.backend.publish(arn, message, subject=subject) message_id = self.backend.publish(
arn, message, subject=subject,
message_attributes=message_attributes)
except ValueError as err: except ValueError as err:
error_response = self._error('InvalidParameter', str(err)) error_response = self._error('InvalidParameter', str(err))
return error_response, dict(status=400) return error_response, dict(status=400)
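Taken together with the `FilterPolicy` support in models.py above, the attribute parsing here enables end-to-end filtering. A sketch of how a test might exercise it (queue and topic names are illustrative; the ARN uses moto's default account id):

```python
import json

import boto3
from moto import mock_sns, mock_sqs


@mock_sns
@mock_sqs
def test_publish_with_filter_policy():
    sns = boto3.client('sns', region_name='us-east-1')
    sqs = boto3.client('sqs', region_name='us-east-1')
    topic_arn = sns.create_topic(Name='orders')['TopicArn']
    queue_url = sqs.create_queue(QueueName='orders')['QueueUrl']
    sub_arn = sns.subscribe(
        TopicArn=topic_arn, Protocol='sqs',
        Endpoint='arn:aws:sqs:us-east-1:123456789012:orders')['SubscriptionArn']
    sns.set_subscription_attributes(
        SubscriptionArn=sub_arn,
        AttributeName='FilterPolicy',
        AttributeValue=json.dumps({'event': ['order_placed']}))
    # Matching attribute -> delivered; a non-matching value would be dropped
    # by _matches_filter_policy before reaching the queue.
    sns.publish(TopicArn=topic_arn, Message='hello',
                MessageAttributes={'event': {'DataType': 'String',
                                             'StringValue': 'order_placed'}})
    assert 'Messages' in sqs.receive_message(QueueUrl=queue_url)
```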

View File

@ -38,6 +38,8 @@ class Message(BaseModel):
self.sent_timestamp = None self.sent_timestamp = None
self.approximate_first_receive_timestamp = None self.approximate_first_receive_timestamp = None
self.approximate_receive_count = 0 self.approximate_receive_count = 0
self.deduplication_id = None
self.group_id = None
self.visible_at = 0 self.visible_at = 0
self.delayed_until = 0 self.delayed_until = 0
@ -152,63 +154,86 @@ class Message(BaseModel):
class Queue(BaseModel): class Queue(BaseModel):
camelcase_attributes = ['ApproximateNumberOfMessages', base_attributes = ['ApproximateNumberOfMessages',
'ApproximateNumberOfMessagesDelayed', 'ApproximateNumberOfMessagesDelayed',
'ApproximateNumberOfMessagesNotVisible', 'ApproximateNumberOfMessagesNotVisible',
'ContentBasedDeduplication', 'CreatedTimestamp',
'CreatedTimestamp', 'DelaySeconds',
'DelaySeconds', 'LastModifiedTimestamp',
'FifoQueue', 'MaximumMessageSize',
'KmsDataKeyReusePeriodSeconds', 'MessageRetentionPeriod',
'KmsMasterKeyId', 'QueueArn',
'LastModifiedTimestamp', 'ReceiveMessageWaitTimeSeconds',
'MaximumMessageSize', 'VisibilityTimeout']
'MessageRetentionPeriod', fifo_attributes = ['FifoQueue',
'QueueArn', 'ContentBasedDeduplication']
'ReceiveMessageWaitTimeSeconds', kms_attributes = ['KmsDataKeyReusePeriodSeconds',
'VisibilityTimeout', 'KmsMasterKeyId']
'WaitTimeSeconds'] ALLOWED_PERMISSIONS = ('*', 'ChangeMessageVisibility', 'DeleteMessage',
ALLOWED_PERMISSIONS = ('*', 'ChangeMessageVisibility', 'DeleteMessage', 'GetQueueAttributes', 'GetQueueAttributes', 'GetQueueUrl',
'GetQueueUrl', 'ReceiveMessage', 'SendMessage') 'ReceiveMessage', 'SendMessage')
def __init__(self, name, region, **kwargs): def __init__(self, name, region, **kwargs):
self.name = name self.name = name
self.visibility_timeout = int(kwargs.get('VisibilityTimeout', 30))
self.region = region self.region = region
self.tags = {} self.tags = {}
self.permissions = {}
self._messages = [] self._messages = []
now = unix_time() now = unix_time()
        # kwargs can also have:
        # [Policy, RedrivePolicy]
        self.fifo_queue = kwargs.get('FifoQueue', 'false') == 'true'
        self.content_based_deduplication = kwargs.get('ContentBasedDeduplication', 'false') == 'true'
        self.kms_master_key_id = kwargs.get('KmsMasterKeyId', 'alias/aws/sqs')
        self.kms_data_key_reuse_period_seconds = int(kwargs.get('KmsDataKeyReusePeriodSeconds', 300))
        self.created_timestamp = now
        self.delay_seconds = int(kwargs.get('DelaySeconds', 0))
        self.last_modified_timestamp = now
        self.maximum_message_size = int(kwargs.get('MaximumMessageSize', 64 << 10))
        self.message_retention_period = int(kwargs.get('MessageRetentionPeriod', 86400 * 4))  # four days
        self.queue_arn = 'arn:aws:sqs:{0}:123456789012:{1}'.format(self.region, self.name)
        self.receive_message_wait_time_seconds = int(kwargs.get('ReceiveMessageWaitTimeSeconds', 0))
        self.permissions = {}
        # wait_time_seconds will be set to immediate return messages
        self.wait_time_seconds = int(kwargs.get('WaitTimeSeconds', 0))
        self.redrive_policy = {}
        self.dead_letter_queue = None
        if 'RedrivePolicy' in kwargs:
            self._setup_dlq(kwargs['RedrivePolicy'])

        self.created_timestamp = now
        self.queue_arn = 'arn:aws:sqs:{0}:123456789012:{1}'.format(self.region,
                                                                   self.name)
        self.dead_letter_queue = None

        # default settings for a non fifo queue
        defaults = {
            'ContentBasedDeduplication': 'false',
            'DelaySeconds': 0,
            'FifoQueue': 'false',
            'KmsDataKeyReusePeriodSeconds': 300,  # five minutes
            'KmsMasterKeyId': None,
            'MaximumMessageSize': int(64 << 10),
            'MessageRetentionPeriod': 86400 * 4,  # four days
            'Policy': None,
            'ReceiveMessageWaitTimeSeconds': 0,
            'RedrivePolicy': None,
            'VisibilityTimeout': 30,
        }
        defaults.update(kwargs)
        self._set_attributes(defaults, now)
# Check some conditions # Check some conditions
if self.fifo_queue and not self.name.endswith('.fifo'): if self.fifo_queue and not self.name.endswith('.fifo'):
raise MessageAttributesInvalid('Queue name must end in .fifo for FIFO queues') raise MessageAttributesInvalid('Queue name must end in .fifo for FIFO queues')
def _set_attributes(self, attributes, now=None):
if not now:
now = unix_time()
integer_fields = ('DelaySeconds', 'KmsDataKeyReusePeriodSeconds',
                  'MaximumMessageSize', 'MessageRetentionPeriod',
                  'ReceiveMessageWaitTimeSeconds', 'VisibilityTimeout')
bool_fields = ('ContentBasedDeduplication', 'FifoQueue')
for key, value in six.iteritems(attributes):
if key in integer_fields:
value = int(value)
if key in bool_fields:
value = value == "true"
if key == 'RedrivePolicy' and value is not None:
continue
setattr(self, camelcase_to_underscores(key), value)
if attributes.get('RedrivePolicy', None):
self._setup_dlq(attributes['RedrivePolicy'])
self.last_modified_timestamp = now
def _setup_dlq(self, policy_json): def _setup_dlq(self, policy_json):
try: try:
self.redrive_policy = json.loads(policy_json) self.redrive_policy = json.loads(policy_json)
@ -251,8 +276,8 @@ class Queue(BaseModel):
if 'VisibilityTimeout' in properties: if 'VisibilityTimeout' in properties:
queue.visibility_timeout = int(properties['VisibilityTimeout']) queue.visibility_timeout = int(properties['VisibilityTimeout'])
if 'WaitTimeSeconds' in properties: if 'ReceiveMessageWaitTimeSeconds' in properties:
queue.wait_time_seconds = int(properties['WaitTimeSeconds']) queue.receive_message_wait_time_seconds = int(properties['ReceiveMessageWaitTimeSeconds'])
return queue return queue
@classmethod @classmethod
@ -281,11 +306,31 @@ class Queue(BaseModel):
@property @property
def attributes(self): def attributes(self):
result = {} result = {}
for attribute in self.camelcase_attributes:
for attribute in self.base_attributes:
attr = getattr(self, camelcase_to_underscores(attribute)) attr = getattr(self, camelcase_to_underscores(attribute))
if isinstance(attr, bool):
attr = str(attr).lower()
result[attribute] = attr result[attribute] = attr
if self.fifo_queue:
for attribute in self.fifo_attributes:
attr = getattr(self, camelcase_to_underscores(attribute))
result[attribute] = attr
if self.kms_master_key_id:
for attribute in self.kms_attributes:
attr = getattr(self, camelcase_to_underscores(attribute))
result[attribute] = attr
if self.policy:
result['Policy'] = self.policy
if self.redrive_policy:
result['RedrivePolicy'] = json.dumps(self.redrive_policy)
for key in result:
if isinstance(result[key], bool):
result[key] = str(result[key]).lower()
return result return result
def url(self, request_url): def url(self, request_url):
@ -352,12 +397,12 @@ class SQSBackend(BaseBackend):
return self.queues.pop(queue_name) return self.queues.pop(queue_name)
return False return False
def set_queue_attribute(self, queue_name, key, value): def set_queue_attributes(self, queue_name, attributes):
queue = self.get_queue(queue_name) queue = self.get_queue(queue_name)
setattr(queue, key, value) queue._set_attributes(attributes)
return queue return queue
def send_message(self, queue_name, message_body, message_attributes=None, delay_seconds=None): def send_message(self, queue_name, message_body, message_attributes=None, delay_seconds=None, deduplication_id=None, group_id=None):
queue = self.get_queue(queue_name) queue = self.get_queue(queue_name)
@ -369,6 +414,12 @@ class SQSBackend(BaseBackend):
message_id = get_random_message_id() message_id = get_random_message_id()
message = Message(message_id, message_body) message = Message(message_id, message_body)
# Attributes, but not *message* attributes
if deduplication_id is not None:
message.deduplication_id = deduplication_id
if group_id is not None:
message.group_id = group_id
if message_attributes: if message_attributes:
message.message_attributes = message_attributes message.message_attributes = message_attributes
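A sketch of the new attributes round-tripping through the API (the queue name must end in `.fifo` when `FifoQueue` is set, per the check above; names are illustrative):

```python
import boto3
from moto import mock_sqs


@mock_sqs
def test_fifo_send_and_receive():
    sqs = boto3.client('sqs', region_name='us-east-1')
    queue_url = sqs.create_queue(
        QueueName='jobs.fifo',
        Attributes={'FifoQueue': 'true'})['QueueUrl']
    sqs.send_message(QueueUrl=queue_url, MessageBody='job-1',
                     MessageGroupId='group-a',
                     MessageDeduplicationId='dedupe-1')
    message = sqs.receive_message(
        QueueUrl=queue_url, AttributeNames=['All'])['Messages'][0]
    assert message['Attributes']['MessageGroupId'] == 'group-a'
    assert message['Attributes']['MessageDeduplicationId'] == 'dedupe-1'
```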

View File

@ -4,7 +4,7 @@ import re
from six.moves.urllib.parse import urlparse from six.moves.urllib.parse import urlparse
from moto.core.responses import BaseResponse from moto.core.responses import BaseResponse
from moto.core.utils import camelcase_to_underscores, amz_crc32, amzn_request_id from moto.core.utils import amz_crc32, amzn_request_id
from .utils import parse_message_attributes from .utils import parse_message_attributes
from .models import sqs_backends from .models import sqs_backends
from .exceptions import ( from .exceptions import (
@ -30,7 +30,7 @@ class SQSResponse(BaseResponse):
@property @property
def attribute(self): def attribute(self):
if not hasattr(self, '_attribute'): if not hasattr(self, '_attribute'):
self._attribute = self._get_map_prefix('Attribute', key_end='Name', value_end='Value') self._attribute = self._get_map_prefix('Attribute', key_end='.Name', value_end='.Value')
return self._attribute return self._attribute
def _get_queue_name(self): def _get_queue_name(self):
@ -87,7 +87,8 @@ class SQSResponse(BaseResponse):
try: try:
queue = self.sqs_backend.get_queue(queue_name) queue = self.sqs_backend.get_queue(queue_name)
except QueueDoesNotExist as e: except QueueDoesNotExist as e:
return self._error('QueueDoesNotExist', e.description) return self._error('AWS.SimpleQueueService.NonExistentQueue',
e.description)
if queue: if queue:
template = self.response_template(GET_QUEUE_URL_RESPONSE) template = self.response_template(GET_QUEUE_URL_RESPONSE)
@ -171,7 +172,8 @@ class SQSResponse(BaseResponse):
try: try:
queue = self.sqs_backend.get_queue(queue_name) queue = self.sqs_backend.get_queue(queue_name)
except QueueDoesNotExist as e: except QueueDoesNotExist as e:
return self._error('QueueDoesNotExist', e.description) return self._error('AWS.SimpleQueueService.NonExistentQueue',
e.description)
template = self.response_template(GET_QUEUE_ATTRIBUTES_RESPONSE) template = self.response_template(GET_QUEUE_ATTRIBUTES_RESPONSE)
return template.render(queue=queue) return template.render(queue=queue)
@ -179,9 +181,8 @@ class SQSResponse(BaseResponse):
def set_queue_attributes(self): def set_queue_attributes(self):
# TODO validate self.get_param('QueueUrl') # TODO validate self.get_param('QueueUrl')
queue_name = self._get_queue_name() queue_name = self._get_queue_name()
        for key, value in self.attribute.items():
            key = camelcase_to_underscores(key)
            self.sqs_backend.set_queue_attribute(queue_name, key, value)

        self.sqs_backend.set_queue_attributes(queue_name, self.attribute)
return SET_QUEUE_ATTRIBUTE_RESPONSE return SET_QUEUE_ATTRIBUTE_RESPONSE
def delete_queue(self): def delete_queue(self):
@ -197,6 +198,8 @@ class SQSResponse(BaseResponse):
def send_message(self): def send_message(self):
message = self._get_param('MessageBody') message = self._get_param('MessageBody')
delay_seconds = int(self._get_param('DelaySeconds', 0)) delay_seconds = int(self._get_param('DelaySeconds', 0))
message_group_id = self._get_param("MessageGroupId")
message_dedupe_id = self._get_param("MessageDeduplicationId")
if len(message) > MAXIMUM_MESSAGE_LENGTH: if len(message) > MAXIMUM_MESSAGE_LENGTH:
return ERROR_TOO_LONG_RESPONSE, dict(status=400) return ERROR_TOO_LONG_RESPONSE, dict(status=400)
@ -212,7 +215,9 @@ class SQSResponse(BaseResponse):
queue_name, queue_name,
message, message,
message_attributes=message_attributes, message_attributes=message_attributes,
delay_seconds=delay_seconds delay_seconds=delay_seconds,
deduplication_id=message_dedupe_id,
group_id=message_group_id
) )
template = self.response_template(SEND_MESSAGE_RESPONSE) template = self.response_template(SEND_MESSAGE_RESPONSE)
return template.render(message=message, message_attributes=message_attributes) return template.render(message=message, message_attributes=message_attributes)
@ -320,10 +325,26 @@ class SQSResponse(BaseResponse):
except TypeError: except TypeError:
message_count = DEFAULT_RECEIVED_MESSAGES message_count = DEFAULT_RECEIVED_MESSAGES
if message_count < 1 or message_count > 10:
return self._error(
"InvalidParameterValue",
"An error occurred (InvalidParameterValue) when calling "
"the ReceiveMessage operation: Value %s for parameter "
"MaxNumberOfMessages is invalid. Reason: must be between "
"1 and 10, if provided." % message_count)
try: try:
wait_time = int(self.querystring.get("WaitTimeSeconds")[0]) wait_time = int(self.querystring.get("WaitTimeSeconds")[0])
except TypeError: except TypeError:
wait_time = queue.wait_time_seconds wait_time = queue.receive_message_wait_time_seconds
+        if wait_time < 0 or wait_time > 20:
+            return self._error(
+                "InvalidParameterValue",
+                "An error occurred (InvalidParameterValue) when calling "
+                "the ReceiveMessage operation: Value %s for parameter "
+                "WaitTimeSeconds is invalid. Reason: must be >= 0 and "
+                "<= 20 if provided." % wait_time)
         try:
             visibility_timeout = self._get_validated_visibility_timeout()
@@ -490,6 +511,18 @@ RECEIVE_MESSAGE_RESPONSE = """<ReceiveMessageResponse>
              <Name>ApproximateFirstReceiveTimestamp</Name>
              <Value>{{ message.approximate_first_receive_timestamp }}</Value>
            </Attribute>
+           {% if message.deduplication_id is not none %}
+           <Attribute>
+             <Name>MessageDeduplicationId</Name>
+             <Value>{{ message.deduplication_id }}</Value>
+           </Attribute>
+           {% endif %}
+           {% if message.group_id is not none %}
+           <Attribute>
+             <Name>MessageGroupId</Name>
+             <Value>{{ message.group_id }}</Value>
+           </Attribute>
+           {% endif %}
            {% if message.message_attributes.items()|count > 0 %}
            <MD5OfMessageAttributes>{{- message.attribute_md5 -}}</MD5OfMessageAttributes>
            {% endif %}

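The new SendMessage parameters and template blocks above can be exercised end-to-end through moto's decorator. A minimal test sketch (queue name, IDs, and assertions are illustrative; per this diff the backend stores the IDs on any message, so no real FIFO queue setup is needed):

```python
import boto3
from moto import mock_sqs


@mock_sqs
def test_fifo_attributes_round_trip():
    client = boto3.client('sqs', region_name='us-east-1')
    queue_url = client.create_queue(QueueName='test-queue')['QueueUrl']

    # The two new SendMessage parameters picked up in send_message()...
    client.send_message(
        QueueUrl=queue_url,
        MessageBody='hello',
        MessageGroupId='group-1',
        MessageDeduplicationId='dedupe-1',
    )

    # ...come back as message attributes via the new template blocks.
    message = client.receive_message(
        QueueUrl=queue_url,
        AttributeNames=['All'],
    )['Messages'][0]
    assert message['Attributes']['MessageGroupId'] == 'group-1'
    assert message['Attributes']['MessageDeduplicationId'] == 'dedupe-1'
```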
View File

@@ -5,7 +5,9 @@ from collections import defaultdict
 from moto.core import BaseBackend, BaseModel
 from moto.ec2 import ec2_backends

+import datetime
 import time
+import uuid


 class Parameter(BaseModel):
@@ -91,7 +93,7 @@ class SimpleSystemManagerBackend(BaseBackend):
                 result.append(self._parameters[name])
         return result

-    def get_parameters_by_path(self, path, with_decryption, recursive):
+    def get_parameters_by_path(self, path, with_decryption, recursive, filters=None):
         """Implement the get-parameters-by-path-API in the backend."""
         result = []
         # path could be with or without a trailing /. we handle this
@@ -102,10 +104,35 @@ class SimpleSystemManagerBackend(BaseBackend):
                 continue
             if '/' in param[len(path) + 1:] and not recursive:
                 continue
+            if not self._match_filters(self._parameters[param], filters):
+                continue
             result.append(self._parameters[param])

         return result

+    @staticmethod
+    def _match_filters(parameter, filters=None):
+        """Return True if the given parameter matches all the filters"""
+        for filter_obj in (filters or []):
+            key = filter_obj['Key']
+            option = filter_obj.get('Option', 'Equals')
+            values = filter_obj.get('Values', [])
+
+            what = None
+            if key == 'Type':
+                what = parameter.type
+            elif key == 'KeyId':
+                what = parameter.keyid
+
+            if option == 'Equals'\
+                    and not any(what == value for value in values):
+                return False
+            elif option == 'BeginsWith'\
+                    and not any(what.startswith(value) for value in values):
+                return False
+        # True if no false match (or no filters at all)
+        return True
+
     def get_parameter(self, name, with_decryption):
         if name in self._parameters:
             return self._parameters[name]
@@ -124,6 +151,7 @@ class SimpleSystemManagerBackend(BaseBackend):
         last_modified_date = time.time()
         self._parameters[name] = Parameter(
             name, value, type, description, keyid, last_modified_date, version)
+        return version

     def add_tags_to_resource(self, resource_type, resource_id, tags):
         for key, value in tags.items():
@@ -138,6 +166,39 @@ class SimpleSystemManagerBackend(BaseBackend):
     def list_tags_for_resource(self, resource_type, resource_id):
         return self._resource_tags[resource_type][resource_id]
+    def send_command(self, **kwargs):
+        instances = kwargs.get('InstanceIds', [])
+        now = datetime.datetime.now()
+        expires_after = now + datetime.timedelta(0, int(kwargs.get('TimeoutSeconds', 3600)))
+        return {
+            'Command': {
+                'CommandId': str(uuid.uuid4()),
+                'DocumentName': kwargs['DocumentName'],
+                'Comment': kwargs.get('Comment'),
+                'ExpiresAfter': expires_after.isoformat(),
+                'Parameters': kwargs['Parameters'],
+                'InstanceIds': instances,
+                'Targets': kwargs.get('Targets'),
+                'RequestedDateTime': now.isoformat(),
+                'Status': 'Success',
+                'StatusDetails': 'string',
+                'OutputS3Region': kwargs.get('OutputS3Region'),
+                'OutputS3BucketName': kwargs.get('OutputS3BucketName'),
+                'OutputS3KeyPrefix': kwargs.get('OutputS3KeyPrefix'),
+                'MaxConcurrency': 'string',
+                'MaxErrors': 'string',
+                'TargetCount': len(instances),
+                'CompletedCount': len(instances),
+                'ErrorCount': 0,
+                'ServiceRole': kwargs.get('ServiceRoleArn'),
+                'NotificationConfig': {
+                    'NotificationArn': 'string',
+                    'NotificationEvents': ['Success'],
+                    'NotificationType': 'Command'
+                }
+            }
+        }

 ssm_backends = {}
 for region, ec2_backend in ec2_backends.items():

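The `_match_filters` helper only inspects `Type` and `KeyId`, which is exactly what `ParameterFilters` plumbs through from the response layer. A rough sketch of hitting the new filter from a test (parameter names and values are illustrative):

```python
import boto3
from moto import mock_ssm


@mock_ssm
def test_get_parameters_by_path_with_filters():
    client = boto3.client('ssm', region_name='us-east-1')
    client.put_parameter(Name='/app/plain', Value='x', Type='String')
    client.put_parameter(Name='/app/secret', Value='y', Type='SecureString')

    # Only the SecureString parameter should survive the Type filter.
    result = client.get_parameters_by_path(
        Path='/app',
        ParameterFilters=[
            {'Key': 'Type', 'Option': 'Equals', 'Values': ['SecureString']},
        ],
    )
    assert [p['Name'] for p in result['Parameters']] == ['/app/secret']
```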
View File

@@ -85,9 +85,10 @@ class SimpleSystemManagerResponse(BaseResponse):
         path = self._get_param('Path')
         with_decryption = self._get_param('WithDecryption')
         recursive = self._get_param('Recursive', False)
+        filters = self._get_param('ParameterFilters')

         result = self.ssm_backend.get_parameters_by_path(
-            path, with_decryption, recursive
+            path, with_decryption, recursive, filters
         )

         response = {
@@ -162,9 +163,18 @@ class SimpleSystemManagerResponse(BaseResponse):
         keyid = self._get_param('KeyId')
         overwrite = self._get_param('Overwrite', False)

-        self.ssm_backend.put_parameter(
+        result = self.ssm_backend.put_parameter(
             name, description, value, type_, keyid, overwrite)
-        return json.dumps({})
+
+        if result is None:
+            error = {
+                '__type': 'ParameterAlreadyExists',
+                'message': 'Parameter {0} already exists.'.format(name)
+            }
+            return json.dumps(error), dict(status=400)
+
+        response = {'Version': result}
+        return json.dumps(response)

     def add_tags_to_resource(self):
         resource_id = self._get_param('ResourceId')
@@ -190,3 +200,8 @@ class SimpleSystemManagerResponse(BaseResponse):
         tag_list = [{'Key': k, 'Value': v} for (k, v) in tags.items()]
         response = {'TagList': tag_list}
         return json.dumps(response)
+
+    def send_command(self):
+        return json.dumps(
+            self.ssm_backend.send_command(**self.request_params)
+        )

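With `put_parameter` now returning the version (and `None` on a duplicate), both behaviors become visible to clients. A rough test sketch (names are illustrative; it assumes versions start at 1 and increment on overwrite, and that the botocore floor bump later in this diff is what exposes the `Version` response field client-side):

```python
import boto3
from botocore.exceptions import ClientError
from moto import mock_ssm


@mock_ssm
def test_put_parameter_versions_and_duplicates():
    client = boto3.client('ssm', region_name='us-east-1')

    # First write returns version 1.
    first = client.put_parameter(Name='/app/key', Value='v1', Type='String')
    assert first['Version'] == 1

    # A second write without Overwrite surfaces ParameterAlreadyExists.
    try:
        client.put_parameter(Name='/app/key', Value='v2', Type='String')
        raise AssertionError('expected ParameterAlreadyExists')
    except ClientError as err:
        assert err.response['Error']['Code'] == 'ParameterAlreadyExists'

    # Overwriting bumps the version.
    second = client.put_parameter(
        Name='/app/key', Value='v2', Type='String', Overwrite=True)
    assert second['Version'] == 2
```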
View File

@@ -21,7 +21,7 @@ from .history_event import HistoryEvent  # flake8: noqa
 from .timeout import Timeout  # flake8: noqa
 from .workflow_type import WorkflowType  # flake8: noqa
 from .workflow_execution import WorkflowExecution  # flake8: noqa
+from time import sleep

 KNOWN_SWF_TYPES = {
     "activity": ActivityType,
@@ -198,6 +198,9 @@ class SWFBackend(BaseBackend):
             wfe.start_decision_task(task.task_token, identity=identity)
             return task
         else:
+            # Sleeping here prevents clients that rely on the timeout from
+            # entering a busy-wait loop.
+            sleep(1)
             return None

     def count_pending_decision_tasks(self, domain_name, task_list):
@@ -293,6 +296,9 @@ class SWFBackend(BaseBackend):
             wfe.start_activity_task(task.task_token, identity=identity)
             return task
         else:
+            # Sleeping here prevents clients that rely on the timeout from
+            # entering a busy-wait loop.
+            sleep(1)
             return None

     def count_pending_activity_tasks(self, domain_name, task_list):
@@ -379,6 +385,14 @@ class SWFBackend(BaseBackend):
         if details:
             activity_task.details = details

+    def signal_workflow_execution(self, domain_name, signal_name, workflow_id, input=None, run_id=None):
+        # process timeouts on all objects
+        self._process_timeouts()
+        domain = self._get_domain(domain_name)
+        wfe = domain.get_workflow_execution(
+            workflow_id, run_id=run_id, raise_if_closed=True)
+        wfe.signal(signal_name, input)
+

 swf_backends = {}
 for region in boto.swf.regions():

View File

@@ -25,6 +25,7 @@ SUPPORTED_HISTORY_EVENT_TYPES = (
     "ActivityTaskTimedOut",
     "DecisionTaskTimedOut",
     "WorkflowExecutionTimedOut",
+    "WorkflowExecutionSignaled"
 )

View File

@@ -599,6 +599,14 @@ class WorkflowExecution(BaseModel):
         self.close_status = "TERMINATED"
         self.close_cause = "OPERATOR_INITIATED"

+    def signal(self, signal_name, input):
+        self._add_event(
+            "WorkflowExecutionSignaled",
+            signal_name=signal_name,
+            input=input,
+        )
+        self.schedule_decision_task()
+
     def first_timeout(self):
         if not self.open or not self.start_timestamp:
             return None

View File

@@ -326,9 +326,9 @@ class SWFResponse(BaseResponse):
         _workflow_type = self._params["workflowType"]
         workflow_name = _workflow_type["name"]
         workflow_version = _workflow_type["version"]
-        _default_task_list = self._params.get("defaultTaskList")
-        if _default_task_list:
-            task_list = _default_task_list.get("name")
+        _task_list = self._params.get("taskList")
+        if _task_list:
+            task_list = _task_list.get("name")
         else:
             task_list = None
         child_policy = self._params.get("childPolicy")
@@ -507,3 +507,20 @@ class SWFResponse(BaseResponse):
         )
         # TODO: make it dynamic when we implement activity tasks cancellation
         return json.dumps({"cancelRequested": False})
+
+    def signal_workflow_execution(self):
+        domain_name = self._params["domain"]
+        signal_name = self._params["signalName"]
+        workflow_id = self._params["workflowId"]
+        _input = self._params.get("input")
+        run_id = self._params.get("runId")
+
+        self._check_string(domain_name)
+        self._check_string(signal_name)
+        self._check_string(workflow_id)
+        self._check_none_or_string(_input)
+        self._check_none_or_string(run_id)
+
+        self.swf_backend.signal_workflow_execution(
+            domain_name, signal_name, workflow_id, _input, run_id)
+        return ""

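Together with the backend `signal_workflow_execution` and the model-level `signal` added earlier in this diff, this handler completes the SignalWorkflowExecution flow. A rough boto3 sketch (domain, workflow names, and timeouts are illustrative):

```python
import boto3
from moto import mock_swf


@mock_swf
def test_signal_workflow_execution():
    client = boto3.client('swf', region_name='us-east-1')
    client.register_domain(
        name='test-domain', workflowExecutionRetentionPeriodInDays='1')
    client.register_workflow_type(
        domain='test-domain', name='test-workflow', version='v1.0',
        defaultTaskList={'name': 'queue'}, defaultChildPolicy='TERMINATE',
        defaultTaskStartToCloseTimeout='300',
        defaultExecutionStartToCloseTimeout='300')
    run_id = client.start_workflow_execution(
        domain='test-domain', workflowId='wf-1',
        workflowType={'name': 'test-workflow', 'version': 'v1.0'})['runId']

    client.signal_workflow_execution(
        domain='test-domain', workflowId='wf-1', runId=run_id,
        signalName='my-signal', input='payload')

    # The signal should land in history as WorkflowExecutionSignaled,
    # the event type added to SUPPORTED_HISTORY_EVENT_TYPES above.
    history = client.get_workflow_execution_history(
        domain='test-domain',
        execution={'workflowId': 'wf-1', 'runId': run_id})
    assert any(e['eventType'] == 'WorkflowExecutionSignaled'
               for e in history['events'])
```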
View File

@@ -51,7 +51,7 @@ def mock_xray_client(f):
     aws_xray_sdk.core.xray_recorder._emitter = MockEmitter()

     try:
-        f(*args, **kwargs)
+        return f(*args, **kwargs)
     finally:
         if old_xray_context_var is None:

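The one-word fix above matters because the decorator previously swallowed the wrapped function's return value. A minimal sketch of the behavior it restores (assuming `aws_xray_sdk` is installed and that `mock_xray_client` is importable from `moto.xray` as below):

```python
from moto.xray import mock_xray_client


@mock_xray_client
def traced_job():
    return 'result'


# Before the fix the decorated call evaluated to None; now the wrapped
# function's return value propagates to the caller.
assert traced_job() == 'result'
```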
View File

@@ -8,7 +8,7 @@ freezegun
 flask
 boto>=2.45.0
 boto3>=1.4.4
-botocore>=1.5.77
+botocore>=1.8.36
 six>=1.9
 prompt-toolkit==1.0.14
 click==6.7

Some files were not shown because too many files have changed in this diff