commit
2190eca96a
@@ -57,3 +57,4 @@ Moto is written by Steve Pulec with contributions from:
 * [Bendeguz Acs](https://github.com/acsbendi)
 * [Craig Anderson](https://github.com/craiga)
 * [Robert Lewis](https://github.com/ralewis85)
+* [Kyle Jones](https://github.com/Kerl1310)
CONFIG_README.md (new file, 107 lines)
@@ -0,0 +1,107 @@
# AWS Config Querying Support in Moto

An experimental feature for AWS Config has been developed to provide AWS Config capabilities in your unit tests.
This feature is experimental, as there are many services that are not yet supported and will require the community to add them
over time. This page details how the feature works and how you can use it.

## What is this and why would I use this?

AWS Config is an AWS service that describes your AWS resource types and can track their changes over time. At this time, moto does not
have support for handling the configuration history changes, but it does have a few methods mocked out that can be immensely useful
for unit testing.

If you are developing automation that needs to pull against AWS Config, then this will help you write tests that can simulate your
code in production.

## How does this work?

The AWS Config capabilities in moto work by examining the state of resources that are created within moto, and then returning that data
in the way that AWS Config would return it (sans history). This works by querying all of the moto backends (regions) for a given
resource type.

However, this will only work on resource types that have this enabled.

### Currently enabled resource types:

1. S3

## Developer Guide

There are several pieces to this for adding new capabilities to moto:

1. Listing resources
1. Describing resources

For both, there are a number of pre-requisites:

### Base Components

In the `moto/core/models.py` file is a class named `ConfigQueryModel`. This is a base class that keeps track of all the
resource type backends.

At a minimum, resource types that have this enabled will have:

1. A `config.py` file that will import the resource type backends (from the `__init__.py`)
1. In the resource's `config.py`, an implementation of the `ConfigQueryModel` class with logic unique to the resource type
1. An instantiation of the `ConfigQueryModel`
1. In the `moto/config/models.py` file, an import of the `ConfigQueryModel` instantiation, and an update of `RESOURCE_MAP` to map the AWS Config resource type
   to the instantiation from the previous step (just imported)

An example of the above is implemented for S3. You can see that by looking at:

1. `moto/s3/config.py`
1. `moto/config/models.py`

As well as the corresponding unit tests in:

1. `tests/s3/test_s3.py`
1. `tests/config/test_config.py`

A note on unit testing: you will want to add a test to ensure that you can query all the resources effectively. For testing this feature,
the unit tests for the `ConfigQueryModel` will not make use of `boto` to create resources, such as S3 buckets. You will need to use the
backend model methods to provision the resources. This is to make the tests compatible with the moto server. You should absolutely make tests
in the resource type to test listing and object fetching.
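As a rough sketch of the wiring described above: the base class and `RESOURCE_MAP` names come from the text, but the "widget" resource type, its class, and its backends below are invented purely for illustration (a real implementation lives in `moto/s3/config.py`):

```python
# Stand-in for moto.core.models.ConfigQueryModel, per the description above.
class ConfigQueryModel(object):
    def __init__(self, backends):
        """Inits based on the resource type's backends (1 for each region if applicable)."""
        self.backends = backends

    def list_config_service_resources(self, resource_ids, resource_name, limit,
                                      next_token, backend_region=None, resource_region=None):
        raise NotImplementedError()


# Hypothetical moto/widget/config.py: subclass the base and query its own backends.
class WidgetConfigQuery(ConfigQueryModel):
    def list_config_service_resources(self, resource_ids, resource_name, limit,
                                      next_token, backend_region=None, resource_region=None):
        identifiers = []
        for region, names in sorted(self.backends.items()):
            for widget in names:  # each backend here is just a list of names
                if resource_ids and widget not in resource_ids:
                    continue
                if resource_name and widget != resource_name:
                    continue
                identifiers.append({'type': 'AWS::Hypothetical::Widget',
                                    'name': widget, 'id': widget, 'region': region})
        return identifiers[:limit], None  # no pagination in this sketch


widget_config_query = WidgetConfigQuery({'us-east-1': ['w1'], 'eu-west-1': ['w2']})

# In moto/config/models.py you would then register it:
# RESOURCE_MAP = {'AWS::Hypothetical::Widget': widget_config_query}
```

The instantiation at module level mirrors how `s3_config_query` is exposed so that `moto/config/models.py` can import a ready-to-use object.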
### Listing

S3 is currently the model implementation, but it is also odd in that S3 is a global resource type with regional resource residency.

For most resource types, though, the following is true:

1. There are regional backends with their own sets of data
1. Config aggregation can pull data from any backend region -- we assume that everything lives in the same account

Implementing the listing capability will be different for each resource type. At a minimum, you will need to return a `List` of `Dict`s
that look like this:

```python
[
    {
        'type': 'AWS::The AWS Config data type',
        'name': 'The name of the resource',
        'id': 'The ID of the resource',
        'region': 'The region of the resource -- if global, then you may want to have the calling logic pass in the '
                  'aggregator region in for the resource region -- or just us-east-1 :P'
    },
    ...
]
```

It's recommended to read the comment for the `ConfigQueryModel` [base class here](moto/core/models.py).

The AWS Config code will see this and format it correctly for both aggregated and non-aggregated calls.

#### General implementation tips

The aggregation and non-aggregation querying can and should just use the same overall logic. The differences are:

1. Non-aggregated listing will specify the region name of the resource backend in `backend_region`
1. Aggregated listing will need to be able to list resource types across ALL backends and optionally filter by passing in `resource_region`

An example of a working implementation of this is [S3](moto/s3/config.py).

Pagination should generally be able to pull out the resource across any region, so it should be sharded by `region-item-name` -- this is not done for S3
because S3 has a globally unique namespace.

### Describing Resources

TODO: Need to fill this in when it's implemented
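The pagination advice above -- collect the full result set from every regional backend, sort on a stable key, then shard the token by region and item name -- can be sketched generically. The `'region/name'` token format and the helper itself are illustrative assumptions, not moto's actual implementation:

```python
def list_across_regions(backends, limit, next_token=None, resource_region=None):
    """Aggregated listing sketch: flatten every regional backend, sort on a
    stable (region, name) key, then page using 'region/name' tokens."""
    items = []
    for region, names in backends.items():
        if resource_region and region != resource_region:
            continue  # optional Filters-style region filter
        items.extend({'type': 'AWS::Hypothetical::Widget', 'name': n,
                      'id': n, 'region': region} for n in names)
    items.sort(key=lambda i: (i['region'], i['name']))  # stable shard order

    start = 0
    if next_token:
        keys = ['{}/{}'.format(i['region'], i['name']) for i in items]
        start = keys.index(next_token)  # raises ValueError on a stale token

    page = items[start:start + limit]
    new_token = None
    if start + limit < len(items):
        nxt = items[start + limit]
        new_token = '{}/{}'.format(nxt['region'], nxt['name'])
    return page, new_token
```

Because the token names the first item of the *next* page rather than an offset, a page boundary stays valid even as the set of regions contributing results changes.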
@@ -2343,7 +2343,7 @@
 - [ ] upload_layer_part
 
 ## ecs
-63% implemented
+49% implemented
 - [X] create_cluster
 - [X] create_service
 - [ ] create_task_set
@@ -2381,8 +2381,8 @@
 - [ ] submit_attachment_state_changes
 - [ ] submit_container_state_change
 - [ ] submit_task_state_change
-- [ ] tag_resource
-- [ ] untag_resource
+- [x] tag_resource
+- [x] untag_resource
 - [ ] update_container_agent
 - [X] update_container_instances_state
 - [X] update_service
@@ -4080,7 +4080,7 @@
 - [ ] get_log_group_fields
 - [ ] get_log_record
 - [ ] get_query_results
-- [ ] list_tags_log_group
+- [X] list_tags_log_group
 - [ ] put_destination
 - [ ] put_destination_policy
 - [X] put_log_events
@@ -4090,9 +4090,9 @@
 - [ ] put_subscription_filter
 - [ ] start_query
 - [ ] stop_query
-- [ ] tag_log_group
+- [X] tag_log_group
 - [ ] test_metric_filter
-- [ ] untag_log_group
+- [X] untag_log_group
 
 ## machinelearning
 0% implemented
@@ -5696,7 +5696,7 @@
 - [ ] update_service
 
 ## ses
-12% implemented
+14% implemented
 - [ ] clone_receipt_rule_set
 - [ ] create_configuration_set
 - [ ] create_configuration_set_event_destination
@@ -5747,7 +5747,7 @@
 - [ ] send_custom_verification_email
 - [X] send_email
 - [X] send_raw_email
-- [ ] send_templated_email
+- [X] send_templated_email
 - [ ] set_active_receipt_rule_set
 - [ ] set_identity_dkim_enabled
 - [ ] set_identity_feedback_forwarding_enabled
@@ -297,6 +297,9 @@ def test_describe_instances_allowed():
 
 See [the related test suite](https://github.com/spulec/moto/blob/master/tests/test_core/test_auth.py) for more examples.
 
+## Experimental: AWS Config Querying
+For details about the experimental AWS Config support please see the [AWS Config readme here](CONFIG_README.md).
+
 ## Very Important -- Recommended Usage
 There are some important caveats to be aware of when using moto:
@@ -230,3 +230,27 @@ class TooManyTags(JsonRESTError):
         super(TooManyTags, self).__init__(
             'ValidationException', "1 validation error detected: Value '{}' at '{}' failed to satisfy "
                                    "constraint: Member must have length less than or equal to 50.".format(tags, param))
+
+
+class InvalidResourceParameters(JsonRESTError):
+    code = 400
+
+    def __init__(self):
+        super(InvalidResourceParameters, self).__init__('ValidationException', 'Both Resource ID and Resource Name '
+                                                                               'cannot be specified in the request')
+
+
+class InvalidLimit(JsonRESTError):
+    code = 400
+
+    def __init__(self, value):
+        super(InvalidLimit, self).__init__('ValidationException', 'Value \'{value}\' at \'limit\' failed to satisfy constraint: Member'
+                                                                  ' must have value less than or equal to 100'.format(value=value))
+
+
+class TooManyResourceIds(JsonRESTError):
+    code = 400
+
+    def __init__(self):
+        super(TooManyResourceIds, self).__init__('ValidationException', "The specified list had more than 20 resource ID's. "
+                                                                        "It must have '20' or less items")
@@ -17,11 +17,12 @@ from moto.config.exceptions import InvalidResourceTypeException, InvalidDelivery
     InvalidSNSTopicARNException, MaxNumberOfDeliveryChannelsExceededException, NoAvailableDeliveryChannelException, \
     NoSuchDeliveryChannelException, LastDeliveryChannelDeleteFailedException, TagKeyTooBig, \
     TooManyTags, TagValueTooBig, TooManyAccountSources, InvalidParameterValueException, InvalidNextTokenException, \
-    NoSuchConfigurationAggregatorException, InvalidTagCharacters, DuplicateTags
+    NoSuchConfigurationAggregatorException, InvalidTagCharacters, DuplicateTags, InvalidLimit, InvalidResourceParameters, TooManyResourceIds
 
 from moto.core import BaseBackend, BaseModel
+from moto.s3.config import s3_config_query
 
-DEFAULT_ACCOUNT_ID = 123456789012
+DEFAULT_ACCOUNT_ID = '123456789012'
 POP_STRINGS = [
     'capitalizeStart',
     'CapitalizeStart',
@@ -32,6 +33,11 @@ POP_STRINGS = [
 ]
 DEFAULT_PAGE_SIZE = 100
 
+# Map the Config resource type to a backend:
+RESOURCE_MAP = {
+    'AWS::S3::Bucket': s3_config_query
+}
+
 
 def datetime2int(date):
     return int(time.mktime(date.timetuple()))
@@ -680,6 +686,110 @@ class ConfigBackend(BaseBackend):
 
         del self.delivery_channels[channel_name]
 
+    def list_discovered_resources(self, resource_type, backend_region, resource_ids, resource_name, limit, next_token):
+        """This will query against the mocked AWS Config listing function that must exist for the resource backend.
+
+        :param resource_type:
+        :param backend_region:
+        :param resource_ids:
+        :param resource_name:
+        :param limit:
+        :param next_token:
+        :return:
+        """
+        identifiers = []
+        new_token = None
+
+        limit = limit or DEFAULT_PAGE_SIZE
+        if limit > DEFAULT_PAGE_SIZE:
+            raise InvalidLimit(limit)
+
+        if resource_ids and resource_name:
+            raise InvalidResourceParameters()
+
+        # Only 20 maximum Resource IDs:
+        if resource_ids and len(resource_ids) > 20:
+            raise TooManyResourceIds()
+
+        # If the resource type exists and the backend region is implemented in moto, then
+        # call upon the resource type's Config Query class to retrieve the list of resources that match the criteria:
+        if RESOURCE_MAP.get(resource_type, {}):
+            # Is this a global resource type? -- if so, re-write the region to 'global':
+            if RESOURCE_MAP[resource_type].backends.get('global'):
+                backend_region = 'global'
+
+            # For non-aggregated queries, we only care about the backend_region. Need to verify that moto has implemented
+            # the region for the given backend:
+            if RESOURCE_MAP[resource_type].backends.get(backend_region):
+                # Fetch the resources for the backend's region:
+                identifiers, new_token = \
+                    RESOURCE_MAP[resource_type].list_config_service_resources(resource_ids, resource_name, limit, next_token)
+
+        result = {'resourceIdentifiers': [
+            {
+                'resourceType': identifier['type'],
+                'resourceId': identifier['id'],
+                'resourceName': identifier['name']
+            }
+            for identifier in identifiers]
+        }
+
+        if new_token:
+            result['nextToken'] = new_token
+
+        return result
+
+    def list_aggregate_discovered_resources(self, aggregator_name, resource_type, filters, limit, next_token):
+        """This will query against the mocked AWS Config listing function that must exist for the resource backend.
+
+        As far as moto goes -- the only real difference between this function and the `list_discovered_resources` function is that
+        this will require a Config Aggregator be set up a priori and can search based on resource regions.
+
+        :param aggregator_name:
+        :param resource_type:
+        :param filters:
+        :param limit:
+        :param next_token:
+        :return:
+        """
+        if not self.config_aggregators.get(aggregator_name):
+            raise NoSuchConfigurationAggregatorException()
+
+        identifiers = []
+        new_token = None
+        filters = filters or {}
+
+        limit = limit or DEFAULT_PAGE_SIZE
+        if limit > DEFAULT_PAGE_SIZE:
+            raise InvalidLimit(limit)
+
+        # If the resource type exists and the backend region is implemented in moto, then
+        # call upon the resource type's Config Query class to retrieve the list of resources that match the criteria:
+        if RESOURCE_MAP.get(resource_type, {}):
+            # We only care about a filter's Region, Resource Name, and Resource ID:
+            resource_region = filters.get('Region')
+            resource_id = [filters['ResourceId']] if filters.get('ResourceId') else None
+            resource_name = filters.get('ResourceName')
+
+            identifiers, new_token = \
+                RESOURCE_MAP[resource_type].list_config_service_resources(resource_id, resource_name, limit, next_token,
+                                                                          resource_region=resource_region)
+
+        result = {'ResourceIdentifiers': [
+            {
+                'SourceAccountId': DEFAULT_ACCOUNT_ID,
+                'SourceRegion': identifier['region'],
+                'ResourceType': identifier['type'],
+                'ResourceId': identifier['id'],
+                'ResourceName': identifier['name']
+            }
+            for identifier in identifiers]
+        }
+
+        if new_token:
+            result['NextToken'] = new_token
+
+        return result
+
+
 config_backends = {}
 boto3_session = Session()
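The validation and response shaping that `list_discovered_resources` performs can be sketched as plain Python. `shape_listing_result` below is a hypothetical helper for illustration only; it raises `ValueError` where moto raises its own `InvalidLimit`, `InvalidResourceParameters`, and `TooManyResourceIds` exceptions:

```python
DEFAULT_PAGE_SIZE = 100


def shape_listing_result(identifiers, new_token, limit=None,
                         resource_ids=None, resource_name=None):
    """Validate paging arguments, then shape backend identifiers the way a
    ListDiscoveredResources response expects (camelCase keys, optional token)."""
    limit = limit or DEFAULT_PAGE_SIZE
    if limit > DEFAULT_PAGE_SIZE:
        raise ValueError('limit must be <= {}'.format(DEFAULT_PAGE_SIZE))
    if resource_ids and resource_name:
        raise ValueError('Both Resource ID and Resource Name cannot be specified')
    if resource_ids and len(resource_ids) > 20:
        raise ValueError('At most 20 resource IDs may be specified')

    result = {'resourceIdentifiers': [
        {'resourceType': i['type'], 'resourceId': i['id'], 'resourceName': i['name']}
        for i in identifiers
    ]}
    if new_token:  # only include the key when there is a next page
        result['nextToken'] = new_token
    return result
```

Keeping `nextToken` out of the response when there is no further page matches how the backend code above builds its result dict.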
@@ -84,3 +84,34 @@ class ConfigResponse(BaseResponse):
     def stop_configuration_recorder(self):
         self.config_backend.stop_configuration_recorder(self._get_param('ConfigurationRecorderName'))
         return ""
+
+    def list_discovered_resources(self):
+        schema = self.config_backend.list_discovered_resources(self._get_param('resourceType'),
+                                                               self.region,
+                                                               self._get_param('resourceIds'),
+                                                               self._get_param('resourceName'),
+                                                               self._get_param('limit'),
+                                                               self._get_param('nextToken'))
+        return json.dumps(schema)
+
+    def list_aggregate_discovered_resources(self):
+        schema = self.config_backend.list_aggregate_discovered_resources(self._get_param('ConfigurationAggregatorName'),
+                                                                         self._get_param('ResourceType'),
+                                                                         self._get_param('Filters'),
+                                                                         self._get_param('Limit'),
+                                                                         self._get_param('NextToken'))
+        return json.dumps(schema)
+
+    """
+    def batch_get_resource_config(self):
+        # TODO implement me!
+        return ""
+
+    def batch_get_aggregate_resource_config(self):
+        # TODO implement me!
+        return ""
+
+    def get_resource_config_history(self):
+        # TODO implement me!
+        return ""
+    """
@@ -104,3 +104,11 @@ class AuthFailureError(RESTError):
         super(AuthFailureError, self).__init__(
             'AuthFailure',
             "AWS was not able to validate the provided access credentials")
+
+
+class InvalidNextTokenException(JsonRESTError):
+    """For AWS Config resource listing. This will be used by many different resource types, and so it is in moto.core."""
+    code = 400
+
+    def __init__(self):
+        super(InvalidNextTokenException, self).__init__('InvalidNextTokenException', 'The nextToken provided is invalid')
@@ -538,6 +538,65 @@ class BaseBackend(object):
         else:
             return HttprettyMockAWS({'global': self})
 
+
+class ConfigQueryModel(object):
+
+    def __init__(self, backends):
+        """Inits based on the resource type's backends (1 for each region if applicable)"""
+        self.backends = backends
+
+    def list_config_service_resources(self, resource_ids, resource_name, limit, next_token, backend_region=None, resource_region=None):
+        """For AWS Config. This will list all of the resources of the given type and optional resource name and region.
+
+        This supports both aggregated and non-aggregated listing. The following notes the difference:
+
+        - Non-Aggregated Listing -
+        This only lists resources within a region. The way that this is implemented in moto is based on the region
+        for the resource backend.
+
+        You must set the `backend_region` to the region that the API request arrived from. `resource_region` can be set to `None`.
+
+        - Aggregated Listing -
+        This lists resources from all potential regional backends. For non-global resource types, this should collect a full
+        list of resources from all the backends, and then be able to filter on the resource region. This is because an
+        aggregator can aggregate resources from multiple regions. In moto, aggregated regions will *assume full aggregation
+        from all resources in all regions for a given resource type*.
+
+        The `backend_region` should be set to `None` for these queries, and the `resource_region` should optionally be set to
+        the `Filters` region parameter to filter out resources that reside in a specific region.
+
+        For aggregated listings, pagination logic should be set such that the next page can properly span all the region backends.
+        As such, the proper way to implement this is to first obtain a full list of results from all the region backends, and then filter
+        from there. It may be valuable to make the pagination token a concatenation of the region and resource name.
+
+        :param resource_region:
+        :param resource_ids:
+        :param resource_name:
+        :param limit:
+        :param next_token:
+        :param backend_region: The region for the backend to pull results from. Set to `None` if this is an aggregated query.
+        :return: This should return a list of Dicts that have the following fields:
+            [
+                {
+                    'type': 'AWS::The AWS Config data type',
+                    'name': 'The name of the resource',
+                    'id': 'The ID of the resource',
+                    'region': 'The region of the resource -- if global, then you may want to have the calling logic pass in the
+                               aggregator region in for the resource region -- or just us-east-1 :P'
+                }
+                , ...
+            ]
+        """
+        raise NotImplementedError()
+
+    def get_config_resource(self):
+        """TODO implement me."""
+        raise NotImplementedError()
+
+
 class base_decorator(object):
     mock_backend = MockAWS
@@ -1,4 +1,5 @@
 from __future__ import unicode_literals
+import itertools
 import json
 import six
 import re
@@ -113,6 +114,21 @@ class DynamoHandler(BaseResponse):
         # getting the indexes
         global_indexes = body.get("GlobalSecondaryIndexes", [])
         local_secondary_indexes = body.get("LocalSecondaryIndexes", [])
+        # Verify AttributeDefinitions list all
+        expected_attrs = []
+        expected_attrs.extend([key['AttributeName'] for key in key_schema])
+        expected_attrs.extend(schema['AttributeName'] for schema in itertools.chain(*list(idx['KeySchema'] for idx in local_secondary_indexes)))
+        expected_attrs.extend(schema['AttributeName'] for schema in itertools.chain(*list(idx['KeySchema'] for idx in global_indexes)))
+        expected_attrs = list(set(expected_attrs))
+        expected_attrs.sort()
+        actual_attrs = [item['AttributeName'] for item in attr]
+        actual_attrs.sort()
+        if actual_attrs != expected_attrs:
+            er = 'com.amazonaws.dynamodb.v20111205#ValidationException'
+            return self.error(er,
+                              'One or more parameter values were invalid: '
+                              'Some index key attributes are not defined in AttributeDefinitions. '
+                              'Keys: ' + str(expected_attrs) + ', AttributeDefinitions: ' + str(actual_attrs))
+
         # get the stream specification
         streams = body.get("StreamSpecification")
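The DynamoDB hunk above checks that the set of attributes referenced by the table's key schema and every index key schema exactly matches `AttributeDefinitions`. That comparison can be isolated as a small helper (a standalone sketch, not moto code; the sample table shapes in the usage below are made up):

```python
import itertools


def index_attrs_mismatch(attribute_definitions, key_schema, local_indexes, global_indexes):
    """Mirror the check above: every attribute named by the table key schema or
    any index key schema must appear in AttributeDefinitions -- and vice versa."""
    expected = [key['AttributeName'] for key in key_schema]
    for idx in itertools.chain(local_indexes, global_indexes):
        expected.extend(k['AttributeName'] for k in idx['KeySchema'])
    expected = sorted(set(expected))
    actual = sorted(item['AttributeName'] for item in attribute_definitions)
    return expected != actual  # True means the request should be rejected
```

Note that, like the diff, this flags extra *unused* entries in `AttributeDefinitions` as well as missing ones, since it compares the full sorted lists for equality.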
@@ -44,15 +44,17 @@ class BaseObject(BaseModel):
 
 class Cluster(BaseObject):
 
-    def __init__(self, cluster_name):
+    def __init__(self, cluster_name, region_name):
         self.active_services_count = 0
-        self.arn = 'arn:aws:ecs:us-east-1:012345678910:cluster/{0}'.format(
+        self.arn = 'arn:aws:ecs:{0}:012345678910:cluster/{1}'.format(
+            region_name,
             cluster_name)
         self.name = cluster_name
         self.pending_tasks_count = 0
         self.registered_container_instances_count = 0
         self.running_tasks_count = 0
         self.status = 'ACTIVE'
+        self.region_name = region_name
 
     @property
     def physical_resource_id(self):
@@ -108,11 +110,11 @@ class Cluster(BaseObject):
 
 class TaskDefinition(BaseObject):
 
-    def __init__(self, family, revision, container_definitions, volumes=None, tags=None):
+    def __init__(self, family, revision, container_definitions, region_name, volumes=None, tags=None):
         self.family = family
         self.revision = revision
-        self.arn = 'arn:aws:ecs:us-east-1:012345678910:task-definition/{0}:{1}'.format(
-            family, revision)
+        self.arn = 'arn:aws:ecs:{0}:012345678910:task-definition/{1}:{2}'.format(
+            region_name, family, revision)
         self.container_definitions = container_definitions
         self.tags = tags if tags is not None else []
         if volumes is None:
@@ -172,7 +174,8 @@ class Task(BaseObject):
     def __init__(self, cluster, task_definition, container_instance_arn,
                  resource_requirements, overrides={}, started_by=''):
         self.cluster_arn = cluster.arn
-        self.task_arn = 'arn:aws:ecs:us-east-1:012345678910:task/{0}'.format(
+        self.task_arn = 'arn:aws:ecs:{0}:012345678910:task/{1}'.format(
+            cluster.region_name,
             str(uuid.uuid4()))
         self.container_instance_arn = container_instance_arn
         self.last_status = 'RUNNING'
@@ -192,9 +195,10 @@ class Task(BaseObject):
 
 class Service(BaseObject):
 
-    def __init__(self, cluster, service_name, task_definition, desired_count, load_balancers=None, scheduling_strategy=None):
+    def __init__(self, cluster, service_name, task_definition, desired_count, load_balancers=None, scheduling_strategy=None, tags=None):
         self.cluster_arn = cluster.arn
-        self.arn = 'arn:aws:ecs:us-east-1:012345678910:service/{0}'.format(
+        self.arn = 'arn:aws:ecs:{0}:012345678910:service/{1}'.format(
+            cluster.region_name,
             service_name)
         self.name = service_name
         self.status = 'ACTIVE'
@@ -216,6 +220,7 @@ class Service(BaseObject):
         ]
         self.load_balancers = load_balancers if load_balancers is not None else []
         self.scheduling_strategy = scheduling_strategy if scheduling_strategy is not None else 'REPLICA'
+        self.tags = tags if tags is not None else []
         self.pending_count = 0
 
     @property
@@ -225,7 +230,7 @@ class Service(BaseObject):
     @property
     def response_object(self):
         response_object = self.gen_response_object()
-        del response_object['name'], response_object['arn']
+        del response_object['name'], response_object['arn'], response_object['tags']
         response_object['serviceName'] = self.name
         response_object['serviceArn'] = self.arn
         response_object['schedulingStrategy'] = self.scheduling_strategy
@@ -273,7 +278,7 @@ class Service(BaseObject):
 
         ecs_backend = ecs_backends[region_name]
         service_name = original_resource.name
-        if original_resource.cluster_arn != Cluster(cluster_name).arn:
+        if original_resource.cluster_arn != Cluster(cluster_name, region_name).arn:
             # TODO: LoadBalancers
             # TODO: Role
             ecs_backend.delete_service(cluster_name, service_name)
@@ -320,7 +325,8 @@ class ContainerInstance(BaseObject):
             'name': 'PORTS_UDP',
             'stringSetValue': [],
             'type': 'STRINGSET'}]
-        self.container_instance_arn = "arn:aws:ecs:us-east-1:012345678910:container-instance/{0}".format(
+        self.container_instance_arn = "arn:aws:ecs:{0}:012345678910:container-instance/{1}".format(
+            region_name,
             str(uuid.uuid4()))
         self.pending_tasks_count = 0
         self.remaining_resources = [
@@ -378,9 +384,10 @@ class ContainerInstance(BaseObject):
 
 
 class ClusterFailure(BaseObject):
-    def __init__(self, reason, cluster_name):
+    def __init__(self, reason, cluster_name, region_name):
         self.reason = reason
-        self.arn = "arn:aws:ecs:us-east-1:012345678910:cluster/{0}".format(
+        self.arn = "arn:aws:ecs:{0}:012345678910:cluster/{1}".format(
+            region_name,
             cluster_name)
 
     @property
@@ -393,9 +400,10 @@ class ClusterFailure(BaseObject):
 
 class ContainerInstanceFailure(BaseObject):
 
-    def __init__(self, reason, container_instance_id):
+    def __init__(self, reason, container_instance_id, region_name):
         self.reason = reason
-        self.arn = "arn:aws:ecs:us-east-1:012345678910:container-instance/{0}".format(
+        self.arn = "arn:aws:ecs:{0}:012345678910:container-instance/{1}".format(
+            region_name,
             container_instance_id)
 
     @property
@@ -438,7 +446,7 @@ class EC2ContainerServiceBackend(BaseBackend):
                 "{0} is not a task_definition".format(task_definition_name))
 
     def create_cluster(self, cluster_name):
-        cluster = Cluster(cluster_name)
+        cluster = Cluster(cluster_name, self.region_name)
         self.clusters[cluster_name] = cluster
         return cluster
 
@@ -461,7 +469,7 @@ class EC2ContainerServiceBackend(BaseBackend):
         list_clusters.append(
             self.clusters[cluster_name].response_object)
|
||||||
else:
|
else:
|
||||||
failures.append(ClusterFailure('MISSING', cluster_name))
|
failures.append(ClusterFailure('MISSING', cluster_name, self.region_name))
|
||||||
return list_clusters, failures
|
return list_clusters, failures
|
||||||
|
|
||||||
def delete_cluster(self, cluster_str):
|
def delete_cluster(self, cluster_str):
|
||||||
@ -479,7 +487,7 @@ class EC2ContainerServiceBackend(BaseBackend):
|
|||||||
self.task_definitions[family] = {}
|
self.task_definitions[family] = {}
|
||||||
revision = 1
|
revision = 1
|
||||||
task_definition = TaskDefinition(
|
task_definition = TaskDefinition(
|
||||||
family, revision, container_definitions, volumes, tags)
|
family, revision, container_definitions, self.region_name, volumes, tags)
|
||||||
self.task_definitions[family][revision] = task_definition
|
self.task_definitions[family][revision] = task_definition
|
||||||
|
|
||||||
return task_definition
|
return task_definition
|
||||||
@ -691,7 +699,7 @@ class EC2ContainerServiceBackend(BaseBackend):
|
|||||||
raise Exception("Could not find task {} on cluster {}".format(
|
raise Exception("Could not find task {} on cluster {}".format(
|
||||||
task_str, cluster_name))
|
task_str, cluster_name))
|
||||||
|
|
||||||
def create_service(self, cluster_str, service_name, task_definition_str, desired_count, load_balancers=None, scheduling_strategy=None):
|
def create_service(self, cluster_str, service_name, task_definition_str, desired_count, load_balancers=None, scheduling_strategy=None, tags=None):
|
||||||
cluster_name = cluster_str.split('/')[-1]
|
cluster_name = cluster_str.split('/')[-1]
|
||||||
if cluster_name in self.clusters:
|
if cluster_name in self.clusters:
|
||||||
cluster = self.clusters[cluster_name]
|
cluster = self.clusters[cluster_name]
|
||||||
@ -701,7 +709,7 @@ class EC2ContainerServiceBackend(BaseBackend):
|
|||||||
desired_count = desired_count if desired_count is not None else 0
|
desired_count = desired_count if desired_count is not None else 0
|
||||||
|
|
||||||
service = Service(cluster, service_name,
|
service = Service(cluster, service_name,
|
||||||
task_definition, desired_count, load_balancers, scheduling_strategy)
|
task_definition, desired_count, load_balancers, scheduling_strategy, tags)
|
||||||
cluster_service_pair = '{0}:{1}'.format(cluster_name, service_name)
|
cluster_service_pair = '{0}:{1}'.format(cluster_name, service_name)
|
||||||
self.services[cluster_service_pair] = service
|
self.services[cluster_service_pair] = service
|
||||||
|
|
||||||
@ -792,7 +800,7 @@ class EC2ContainerServiceBackend(BaseBackend):
|
|||||||
container_instance_objects.append(container_instance)
|
container_instance_objects.append(container_instance)
|
||||||
else:
|
else:
|
||||||
failures.append(ContainerInstanceFailure(
|
failures.append(ContainerInstanceFailure(
|
||||||
'MISSING', container_instance_id))
|
'MISSING', container_instance_id, self.region_name))
|
||||||
|
|
||||||
return container_instance_objects, failures
|
return container_instance_objects, failures
|
||||||
|
|
||||||
@ -814,7 +822,7 @@ class EC2ContainerServiceBackend(BaseBackend):
|
|||||||
container_instance.status = status
|
container_instance.status = status
|
||||||
container_instance_objects.append(container_instance)
|
container_instance_objects.append(container_instance)
|
||||||
else:
|
else:
|
||||||
failures.append(ContainerInstanceFailure('MISSING', container_instance_id))
|
failures.append(ContainerInstanceFailure('MISSING', container_instance_id, self.region_name))
|
||||||
|
|
||||||
return container_instance_objects, failures
|
return container_instance_objects, failures
|
||||||
|
|
||||||
@ -958,22 +966,31 @@ class EC2ContainerServiceBackend(BaseBackend):
|
|||||||
|
|
||||||
yield task_fam
|
yield task_fam
|
||||||
|
|
||||||
def list_tags_for_resource(self, resource_arn):
|
@staticmethod
|
||||||
"""Currently only implemented for task definitions"""
|
def _parse_resource_arn(resource_arn):
|
||||||
match = re.match(
|
match = re.match(
|
||||||
"^arn:aws:ecs:(?P<region>[^:]+):(?P<account_id>[^:]+):(?P<service>[^:]+)/(?P<id>.*)$",
|
"^arn:aws:ecs:(?P<region>[^:]+):(?P<account_id>[^:]+):(?P<service>[^:]+)/(?P<id>.*)$",
|
||||||
resource_arn)
|
resource_arn)
|
||||||
if not match:
|
if not match:
|
||||||
raise JsonRESTError('InvalidParameterException', 'The ARN provided is invalid.')
|
raise JsonRESTError('InvalidParameterException', 'The ARN provided is invalid.')
|
||||||
|
return match.groupdict()
|
||||||
|
|
||||||
service = match.group("service")
|
def list_tags_for_resource(self, resource_arn):
|
||||||
if service == "task-definition":
|
"""Currently implemented only for task definitions and services"""
|
||||||
|
parsed_arn = self._parse_resource_arn(resource_arn)
|
||||||
|
if parsed_arn["service"] == "task-definition":
|
||||||
for task_definition in self.task_definitions.values():
|
for task_definition in self.task_definitions.values():
|
||||||
for revision in task_definition.values():
|
for revision in task_definition.values():
|
||||||
if revision.arn == resource_arn:
|
if revision.arn == resource_arn:
|
||||||
return revision.tags
|
return revision.tags
|
||||||
else:
|
else:
|
||||||
raise TaskDefinitionNotFoundException()
|
raise TaskDefinitionNotFoundException()
|
||||||
|
elif parsed_arn["service"] == "service":
|
||||||
|
for service in self.services.values():
|
||||||
|
if service.arn == resource_arn:
|
||||||
|
return service.tags
|
||||||
|
else:
|
||||||
|
raise ServiceNotFoundException(service_name=parsed_arn["id"])
|
||||||
raise NotImplementedError()
|
raise NotImplementedError()
|
||||||
|
|
||||||
def _get_last_task_definition_revision_id(self, family):
|
def _get_last_task_definition_revision_id(self, family):
|
||||||
@ -981,6 +998,42 @@ class EC2ContainerServiceBackend(BaseBackend):
|
|||||||
if definitions:
|
if definitions:
|
||||||
return max(definitions.keys())
|
return max(definitions.keys())
|
||||||
|
|
||||||
|
def tag_resource(self, resource_arn, tags):
|
||||||
|
"""Currently implemented only for services"""
|
||||||
|
parsed_arn = self._parse_resource_arn(resource_arn)
|
||||||
|
if parsed_arn["service"] == "service":
|
||||||
|
for service in self.services.values():
|
||||||
|
if service.arn == resource_arn:
|
||||||
|
service.tags = self._merge_tags(service.tags, tags)
|
||||||
|
return {}
|
||||||
|
else:
|
||||||
|
raise ServiceNotFoundException(service_name=parsed_arn["id"])
|
||||||
|
raise NotImplementedError()
|
||||||
|
|
||||||
|
def _merge_tags(self, existing_tags, new_tags):
|
||||||
|
merged_tags = new_tags
|
||||||
|
new_keys = self._get_keys(new_tags)
|
||||||
|
for existing_tag in existing_tags:
|
||||||
|
if existing_tag["key"] not in new_keys:
|
||||||
|
merged_tags.append(existing_tag)
|
||||||
|
return merged_tags
|
||||||
|
|
||||||
|
@staticmethod
|
||||||
|
def _get_keys(tags):
|
||||||
|
return [tag['key'] for tag in tags]
|
||||||
|
|
||||||
|
def untag_resource(self, resource_arn, tag_keys):
|
||||||
|
"""Currently implemented only for services"""
|
||||||
|
parsed_arn = self._parse_resource_arn(resource_arn)
|
||||||
|
if parsed_arn["service"] == "service":
|
||||||
|
for service in self.services.values():
|
||||||
|
if service.arn == resource_arn:
|
||||||
|
service.tags = [tag for tag in service.tags if tag["key"] not in tag_keys]
|
||||||
|
return {}
|
||||||
|
else:
|
||||||
|
raise ServiceNotFoundException(service_name=parsed_arn["id"])
|
||||||
|
raise NotImplementedError()
|
||||||
|
|
||||||
|
|
||||||
available_regions = boto3.session.Session().get_available_regions("ecs")
|
available_regions = boto3.session.Session().get_available_regions("ecs")
|
||||||
ecs_backends = {region: EC2ContainerServiceBackend(region) for region in available_regions}
|
ecs_backends = {region: EC2ContainerServiceBackend(region) for region in available_regions}
|
||||||
|
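The `_parse_resource_arn` helper factored out in this diff is a plain named-group regex over ECS ARNs. A standalone sketch of that parsing (stdlib only; `parse_resource_arn` and the `ValueError` here are illustrative stand-ins for the method and moto's `JsonRESTError`):

```python
import re

# Named groups pull the region, account, resource type, and resource ID
# out of an ECS ARN such as arn:aws:ecs:<region>:<account>:<type>/<id>.
ARN_PATTERN = re.compile(
    r"^arn:aws:ecs:(?P<region>[^:]+):(?P<account_id>[^:]+):(?P<service>[^:]+)/(?P<id>.*)$"
)

def parse_resource_arn(resource_arn):
    match = ARN_PATTERN.match(resource_arn)
    if not match:
        raise ValueError("The ARN provided is invalid.")
    return match.groupdict()

parsed = parse_resource_arn("arn:aws:ecs:us-east-1:012345678910:service/my-service")
# parsed["service"] is the resource-type segment, parsed["id"] the resource name
```

The `service` group stops at the first `/`, so anything after it (including further slashes) lands in `id`.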
@@ -156,8 +156,9 @@ class EC2ContainerServiceResponse(BaseResponse):
         desired_count = self._get_int_param('desiredCount')
         load_balancers = self._get_param('loadBalancers')
         scheduling_strategy = self._get_param('schedulingStrategy')
+        tags = self._get_param('tags')
         service = self.ecs_backend.create_service(
-            cluster_str, service_name, task_definition_str, desired_count, load_balancers, scheduling_strategy)
+            cluster_str, service_name, task_definition_str, desired_count, load_balancers, scheduling_strategy, tags)
         return json.dumps({
             'service': service.response_object
         })
@@ -319,3 +320,15 @@ class EC2ContainerServiceResponse(BaseResponse):
         resource_arn = self._get_param('resourceArn')
         tags = self.ecs_backend.list_tags_for_resource(resource_arn)
         return json.dumps({'tags': tags})
+
+    def tag_resource(self):
+        resource_arn = self._get_param('resourceArn')
+        tags = self._get_param('tags')
+        results = self.ecs_backend.tag_resource(resource_arn, tags)
+        return json.dumps(results)
+
+    def untag_resource(self):
+        resource_arn = self._get_param('resourceArn')
+        tag_keys = self._get_param('tagKeys')
+        results = self.ecs_backend.untag_resource(resource_arn, tag_keys)
+        return json.dumps(results)
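The `tag_resource` path wired up here merges incoming tags with the service's existing tags via `_merge_tags`: incoming tags win, and existing tags whose keys are not being overwritten are kept. A standalone sketch of that merge semantics (stdlib only; `merge_tags` is an illustrative name):

```python
# Tags use the ECS wire shape: a list of {"key": ..., "value": ...} dicts.
def merge_tags(existing_tags, new_tags):
    merged = list(new_tags)
    new_keys = {tag["key"] for tag in new_tags}
    # Keep any existing tag whose key is not overwritten by the new set.
    merged.extend(tag for tag in existing_tags if tag["key"] not in new_keys)
    return merged

existing = [{"key": "env", "value": "dev"}, {"key": "team", "value": "a"}]
incoming = [{"key": "env", "value": "prod"}]
merged = merge_tags(existing, incoming)
# "env" is overwritten to "prod"; "team" survives
```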
@@ -1,6 +1,7 @@
 from __future__ import unicode_literals
 
-from .models import events_backend
+from .models import events_backends
+from ..core.models import base_decorator
 
-events_backends = {"global": events_backend}
-mock_events = events_backend.decorator
+events_backend = events_backends['us-east-1']
+mock_events = base_decorator(events_backends)
@@ -1,6 +1,7 @@
 import os
 import re
 import json
+import boto3
 
 from moto.core.exceptions import JsonRESTError
 from moto.core import BaseBackend, BaseModel
@@ -9,10 +10,14 @@ from moto.core import BaseBackend, BaseModel
 class Rule(BaseModel):
 
     def _generate_arn(self, name):
-        return 'arn:aws:events:us-west-2:111111111111:rule/' + name
+        return 'arn:aws:events:{region_name}:111111111111:rule/{name}'.format(
+            region_name=self.region_name,
+            name=name
+        )
 
-    def __init__(self, name, **kwargs):
+    def __init__(self, name, region_name, **kwargs):
         self.name = name
+        self.region_name = region_name
         self.arn = kwargs.get('Arn') or self._generate_arn(name)
         self.event_pattern = kwargs.get('EventPattern')
         self.schedule_exp = kwargs.get('ScheduleExpression')
@@ -55,15 +60,20 @@ class EventsBackend(BaseBackend):
     ACCOUNT_ID = re.compile(r'^(\d{1,12}|\*)$')
     STATEMENT_ID = re.compile(r'^[a-zA-Z0-9-_]{1,64}$')
 
-    def __init__(self):
+    def __init__(self, region_name):
         self.rules = {}
         # This array tracks the order in which the rules have been added, since
         # 2.6 doesn't have OrderedDicts.
         self.rules_order = []
         self.next_tokens = {}
+        self.region_name = region_name
         self.permissions = {}
 
+    def reset(self):
+        region_name = self.region_name
+        self.__dict__ = {}
+        self.__init__(region_name)
+
     def _get_rule_by_index(self, i):
         return self.rules.get(self.rules_order[i])
 
@@ -173,7 +183,7 @@ class EventsBackend(BaseBackend):
         return return_obj
 
     def put_rule(self, name, **kwargs):
-        rule = Rule(name, **kwargs)
+        rule = Rule(name, self.region_name, **kwargs)
         self.rules[rule.name] = rule
         self.rules_order.append(rule.name)
         return rule.arn
@@ -229,7 +239,7 @@ class EventsBackend(BaseBackend):
             raise JsonRESTError('ResourceNotFoundException', 'StatementId not found')
 
     def describe_event_bus(self):
-        arn = "arn:aws:events:us-east-1:000000000000:event-bus/default"
+        arn = "arn:aws:events:{0}:000000000000:event-bus/default".format(self.region_name)
         statements = []
         for statement_id, data in self.permissions.items():
             statements.append({
@@ -248,4 +258,5 @@ class EventsBackend(BaseBackend):
         }
 
 
-events_backend = EventsBackend()
+available_regions = boto3.session.Session().get_available_regions("events")
+events_backends = {region: EventsBackend(region) for region in available_regions}
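The `Rule._generate_arn` change above swaps the hard-coded `us-west-2` for the backend's own region via named `str.format` placeholders. A standalone sketch (the `account_id` parameter is an illustrative addition; the diff keeps the account hard-coded):

```python
# Build a regionalized EventBridge/CloudWatch Events rule ARN.
def generate_rule_arn(region_name, name, account_id="111111111111"):
    return "arn:aws:events:{region_name}:{account_id}:rule/{name}".format(
        region_name=region_name, account_id=account_id, name=name
    )

arn = generate_rule_arn("eu-west-1", "nightly-build")
# -> "arn:aws:events:eu-west-1:111111111111:rule/nightly-build"
```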
@@ -2,11 +2,21 @@ import json
 import re
 
 from moto.core.responses import BaseResponse
-from moto.events import events_backend
+from moto.events import events_backends
 
 
 class EventsHandler(BaseResponse):
 
+    @property
+    def events_backend(self):
+        """
+        Events Backend
+
+        :return: Events Backend object
+        :rtype: moto.events.models.EventsBackend
+        """
+        return events_backends[self.region]
+
     def _generate_rule_dict(self, rule):
         return {
             'Name': rule.name,
@@ -40,7 +50,7 @@ class EventsHandler(BaseResponse):
 
         if not name:
             return self.error('ValidationException', 'Parameter Name is required.')
-        events_backend.delete_rule(name)
+        self.events_backend.delete_rule(name)
 
         return '', self.response_headers
 
@@ -50,7 +60,7 @@ class EventsHandler(BaseResponse):
         if not name:
             return self.error('ValidationException', 'Parameter Name is required.')
 
-        rule = events_backend.describe_rule(name)
+        rule = self.events_backend.describe_rule(name)
 
         if not rule:
             return self.error('ResourceNotFoundException', 'Rule test does not exist.')
@@ -64,7 +74,7 @@ class EventsHandler(BaseResponse):
         if not name:
             return self.error('ValidationException', 'Parameter Name is required.')
 
-        if not events_backend.disable_rule(name):
+        if not self.events_backend.disable_rule(name):
             return self.error('ResourceNotFoundException', 'Rule ' + name + ' does not exist.')
 
         return '', self.response_headers
@@ -75,7 +85,7 @@ class EventsHandler(BaseResponse):
         if not name:
             return self.error('ValidationException', 'Parameter Name is required.')
 
-        if not events_backend.enable_rule(name):
+        if not self.events_backend.enable_rule(name):
             return self.error('ResourceNotFoundException', 'Rule ' + name + ' does not exist.')
 
         return '', self.response_headers
@@ -91,7 +101,7 @@ class EventsHandler(BaseResponse):
         if not target_arn:
             return self.error('ValidationException', 'Parameter TargetArn is required.')
 
-        rule_names = events_backend.list_rule_names_by_target(
+        rule_names = self.events_backend.list_rule_names_by_target(
             target_arn, next_token, limit)
 
         return json.dumps(rule_names), self.response_headers
@@ -101,7 +111,7 @@ class EventsHandler(BaseResponse):
         next_token = self._get_param('NextToken')
         limit = self._get_param('Limit')
 
-        rules = events_backend.list_rules(prefix, next_token, limit)
+        rules = self.events_backend.list_rules(prefix, next_token, limit)
         rules_obj = {'Rules': []}
 
         for rule in rules['Rules']:
@@ -121,7 +131,7 @@ class EventsHandler(BaseResponse):
             return self.error('ValidationException', 'Parameter Rule is required.')
 
         try:
-            targets = events_backend.list_targets_by_rule(
+            targets = self.events_backend.list_targets_by_rule(
                 rule_name, next_token, limit)
         except KeyError:
             return self.error('ResourceNotFoundException', 'Rule ' + rule_name + ' does not exist.')
@@ -131,7 +141,7 @@ class EventsHandler(BaseResponse):
     def put_events(self):
         events = self._get_param('Entries')
 
-        failed_entries = events_backend.put_events(events)
+        failed_entries = self.events_backend.put_events(events)
 
         if failed_entries:
             return json.dumps({
@@ -165,7 +175,7 @@ class EventsHandler(BaseResponse):
                 re.match('^rate\(\d*\s(minute|minutes|hour|hours|day|days)\)', sched_exp)):
             return self.error('ValidationException', 'Parameter ScheduleExpression is not valid.')
 
-        rule_arn = events_backend.put_rule(
+        rule_arn = self.events_backend.put_rule(
             name,
             ScheduleExpression=sched_exp,
             EventPattern=event_pattern,
@@ -186,7 +196,7 @@ class EventsHandler(BaseResponse):
         if not targets:
             return self.error('ValidationException', 'Parameter Targets is required.')
 
-        if not events_backend.put_targets(rule_name, targets):
+        if not self.events_backend.put_targets(rule_name, targets):
             return self.error('ResourceNotFoundException', 'Rule ' + rule_name + ' does not exist.')
 
         return '', self.response_headers
@@ -201,7 +211,7 @@ class EventsHandler(BaseResponse):
         if not ids:
             return self.error('ValidationException', 'Parameter Ids is required.')
 
-        if not events_backend.remove_targets(rule_name, ids):
+        if not self.events_backend.remove_targets(rule_name, ids):
             return self.error('ResourceNotFoundException', 'Rule ' + rule_name + ' does not exist.')
 
         return '', self.response_headers
@@ -214,16 +224,16 @@ class EventsHandler(BaseResponse):
         principal = self._get_param('Principal')
         statement_id = self._get_param('StatementId')
 
-        events_backend.put_permission(action, principal, statement_id)
+        self.events_backend.put_permission(action, principal, statement_id)
 
         return ''
 
     def remove_permission(self):
         statement_id = self._get_param('StatementId')
 
-        events_backend.remove_permission(statement_id)
+        self.events_backend.remove_permission(statement_id)
 
         return ''
 
     def describe_event_bus(self):
-        return json.dumps(events_backend.describe_event_bus())
+        return json.dumps(self.events_backend.describe_event_bus())
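The handler change above replaces a single module-level backend with a dict of per-region backends, selected through an `events_backend` property keyed on the request's region. A minimal stdlib sketch of that dispatch pattern (class and attribute names here are illustrative, not moto's):

```python
# One backend instance per region; handlers pick theirs via a property.
class FakeBackend:
    def __init__(self, region_name):
        self.region_name = region_name
        self.rules = {}

REGIONS = ["us-east-1", "eu-west-1"]
backends = {region: FakeBackend(region) for region in REGIONS}

class Handler:
    def __init__(self, region):
        self.region = region

    @property
    def backend(self):
        # Every call resolves against the current request's region.
        return backends[self.region]

h = Handler("eu-west-1")
h.backend.rules["r1"] = "rule-object"
# state written through this handler is isolated to eu-west-1's backend
```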
@@ -231,6 +231,19 @@ class LogGroup:
     def set_retention_policy(self, retention_in_days):
         self.retentionInDays = retention_in_days
 
+    def list_tags(self):
+        return self.tags if self.tags else {}
+
+    def tag(self, tags):
+        if self.tags:
+            self.tags.update(tags)
+        else:
+            self.tags = tags
+
+    def untag(self, tags_to_remove):
+        if self.tags:
+            self.tags = {k: v for (k, v) in self.tags.items() if k not in tags_to_remove}
+
 
 class LogsBackend(BaseBackend):
     def __init__(self, region_name):
@@ -322,5 +335,23 @@ class LogsBackend(BaseBackend):
         log_group = self.groups[log_group_name]
         return log_group.set_retention_policy(None)
 
+    def list_tags_log_group(self, log_group_name):
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
+        log_group = self.groups[log_group_name]
+        return log_group.list_tags()
+
+    def tag_log_group(self, log_group_name, tags):
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
+        log_group = self.groups[log_group_name]
+        log_group.tag(tags)
+
+    def untag_log_group(self, log_group_name, tags):
+        if log_group_name not in self.groups:
+            raise ResourceNotFoundException()
+        log_group = self.groups[log_group_name]
+        log_group.untag(tags)
+
 
 logs_backends = {region.name: LogsBackend(region.name) for region in boto.logs.regions()}
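Unlike ECS, log-group tags here are a plain dict: `tag()` merges keys, `untag()` drops the listed keys, and an unset tag store reads back as `{}`. A self-contained sketch of those semantics (the class name is illustrative):

```python
# Dict-based tag store mirroring the LogGroup tagging semantics in this diff.
class LogGroupTags:
    def __init__(self):
        self.tags = None  # tags start unset

    def list_tags(self):
        return self.tags if self.tags else {}

    def tag(self, tags):
        if self.tags:
            self.tags.update(tags)  # merge: incoming keys overwrite
        else:
            self.tags = tags

    def untag(self, keys_to_remove):
        if self.tags:
            self.tags = {k: v for k, v in self.tags.items() if k not in keys_to_remove}

group = LogGroupTags()
group.tag({"env": "dev"})
group.tag({"env": "prod", "team": "a"})
group.untag(["team"])
# list_tags() now returns {"env": "prod"}
```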
@@ -134,3 +134,22 @@ class LogsResponse(BaseResponse):
         log_group_name = self._get_param('logGroupName')
         self.logs_backend.delete_retention_policy(log_group_name)
         return ''
+
+    def list_tags_log_group(self):
+        log_group_name = self._get_param('logGroupName')
+        tags = self.logs_backend.list_tags_log_group(log_group_name)
+        return json.dumps({
+            'tags': tags
+        })
+
+    def tag_log_group(self):
+        log_group_name = self._get_param('logGroupName')
+        tags = self._get_param('tags')
+        self.logs_backend.tag_log_group(log_group_name, tags)
+        return ''
+
+    def untag_log_group(self):
+        log_group_name = self._get_param('logGroupName')
+        tags = self._get_param('tags')
+        self.logs_backend.untag_log_group(log_group_name, tags)
+        return ''
70	moto/s3/config.py	Normal file
@@ -0,0 +1,70 @@
+from moto.core.exceptions import InvalidNextTokenException
+from moto.core.models import ConfigQueryModel
+from moto.s3 import s3_backends
+
+
+class S3ConfigQuery(ConfigQueryModel):
+
+    def list_config_service_resources(self, resource_ids, resource_name, limit, next_token, backend_region=None, resource_region=None):
+        # S3 need not care about "backend_region" as S3 is global. The resource_region only matters for aggregated queries as you can
+        # filter on bucket regions for them. For other resource types, you would need to iterate appropriately for the backend_region.
+
+        # Resource IDs are the same as S3 bucket names
+        # For aggregation -- did we get both a resource ID and a resource name?
+        if resource_ids and resource_name:
+            # If the values are different, then return an empty list:
+            if resource_name not in resource_ids:
+                return [], None
+
+        # If no filter was passed in for resource names/ids then return them all:
+        if not resource_ids and not resource_name:
+            bucket_list = list(self.backends['global'].buckets.keys())
+
+        else:
+            # Match the resource name / ID:
+            bucket_list = []
+            filter_buckets = [resource_name] if resource_name else resource_ids
+
+            for bucket in self.backends['global'].buckets.keys():
+                if bucket in filter_buckets:
+                    bucket_list.append(bucket)
+
+        # If a resource_region was supplied (aggregated only), then filter on bucket region too:
+        if resource_region:
+            region_buckets = []
+
+            for bucket in bucket_list:
+                if self.backends['global'].buckets[bucket].region_name == resource_region:
+                    region_buckets.append(bucket)
+
+            bucket_list = region_buckets
+
+        if not bucket_list:
+            return [], None
+
+        # Pagination logic:
+        sorted_buckets = sorted(bucket_list)
+        new_token = None
+
+        # Get the start:
+        if not next_token:
+            start = 0
+        else:
+            # Tokens for this moto feature is just the bucket name:
+            # For OTHER non-global resource types, it's the region concatenated with the resource ID.
+            if next_token not in sorted_buckets:
+                raise InvalidNextTokenException()
+
+            start = sorted_buckets.index(next_token)
+
+        # Get the list of items to collect:
+        bucket_list = sorted_buckets[start:(start + limit)]
+
+        if len(sorted_buckets) > (start + limit):
+            new_token = sorted_buckets[start + limit]
+
+        return [{'type': 'AWS::S3::Bucket', 'id': bucket, 'name': bucket, 'region': self.backends['global'].buckets[bucket].region_name}
+                for bucket in bucket_list], new_token
+
+
+s3_config_query = S3ConfigQuery(s3_backends)
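The pagination scheme in `list_config_service_resources` uses the resource name itself as the continuation token: results are sorted, a page of `limit` items is sliced off, and the token returned is the name of the first item on the next page. A stdlib sketch of that scheme (`paginate` and the `ValueError` stand in for the method and moto's `InvalidNextTokenException`):

```python
# next_token is simply the first item of the following page.
def paginate(names, limit, next_token=None):
    sorted_names = sorted(names)
    if next_token is None:
        start = 0
    else:
        if next_token not in sorted_names:
            raise ValueError("InvalidNextToken")
        start = sorted_names.index(next_token)
    page = sorted_names[start:start + limit]
    new_token = sorted_names[start + limit] if len(sorted_names) > start + limit else None
    return page, new_token

buckets = ["delta", "alpha", "charlie", "bravo"]
page1, token = paginate(buckets, limit=2)
# page1 == ["alpha", "bravo"], token == "charlie"
page2, token2 = paginate(buckets, limit=2, next_token=token)
# page2 == ["charlie", "delta"], token2 is None
```

One consequence of this design: if the bucket used as a token is deleted between calls, the token becomes invalid, which is why the lookup raises rather than guessing an offset.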
@@ -913,11 +913,11 @@ class ResponseObject(_TemplateEnvironmentMixin, ActionAuthenticatorMixin):
             # Copy key
             # you can have a quoted ?version=abc with a version Id, so work on
             # we need to parse the unquoted string first
-            src_key = clean_key_name(request.headers.get("x-amz-copy-source"))
+            src_key = request.headers.get("x-amz-copy-source")
             if isinstance(src_key, six.binary_type):
                 src_key = src_key.decode('utf-8')
             src_key_parsed = urlparse(src_key)
-            src_bucket, src_key = unquote(src_key_parsed.path).\
+            src_bucket, src_key = clean_key_name(src_key_parsed.path).\
                 lstrip("/").split("/", 1)
             src_version_id = parse_qs(src_key_parsed.query).get(
                 'versionId', [None])[0]
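The change above defers key-name cleaning until after the URL is parsed, so a quoted `?` inside the key no longer swallows a real `versionId` query string. A rough stdlib-only illustration of that order of operations (using `unquote` in place of moto's `clean_key_name` helper):

```python
from urllib.parse import urlparse, parse_qs, unquote

# A copy-source header whose key contains a quoted '?' plus a real versionId query:
src = 'src-bucket/some%3Fkey?versionId=abc123'

# Parse first, so only 'versionId=abc123' lands in the query component...
parsed = urlparse(src)
# ...then unquote just the path, turning '%3F' back into a literal '?':
src_bucket, src_key = unquote(parsed.path).lstrip('/').split('/', 1)
src_version_id = parse_qs(parsed.query).get('versionId', [None])[0]
```

Unquoting the whole header first (the old behavior) would have produced `some?key?versionId=abc123`, leaving `urlparse` unable to tell the key's `?` from the query separator.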
@@ -49,6 +49,21 @@ class Message(BaseModel):
         self.destinations = destinations


+class TemplateMessage(BaseModel):
+
+    def __init__(self,
+                 message_id,
+                 source,
+                 template,
+                 template_data,
+                 destinations):
+        self.id = message_id
+        self.source = source
+        self.template = template
+        self.template_data = template_data
+        self.destinations = destinations
+
+
 class RawMessage(BaseModel):

     def __init__(self, message_id, source, destinations, raw_data):
@@ -123,10 +138,34 @@ class SESBackend(BaseBackend):
         self.sent_message_count += recipient_count
         return message

+    def send_templated_email(self, source, template, template_data, destinations, region):
+        recipient_count = sum(map(len, destinations.values()))
+        if recipient_count > RECIPIENT_LIMIT:
+            raise MessageRejectedError('Too many recipients.')
+        if not self._is_verified_address(source):
+            raise MessageRejectedError(
+                "Email address not verified %s" % source
+            )
+
+        self.__process_sns_feedback__(source, destinations, region)
+
+        message_id = get_random_message_id()
+        message = TemplateMessage(message_id,
+                                  source,
+                                  template,
+                                  template_data,
+                                  destinations)
+        self.sent_messages.append(message)
+        self.sent_message_count += recipient_count
+        return message
+
     def __type_of_message__(self, destinations):
-        """Checks the destination for any special address that could indicate delivery, complaint or bounce
-        like in SES simualtor"""
-        alladdress = destinations.get("ToAddresses", []) + destinations.get("CcAddresses", []) + destinations.get("BccAddresses", [])
+        """Checks the destination for any special address that could indicate delivery,
+        complaint or bounce like in SES simualtor"""
+        alladdress = destinations.get(
+            "ToAddresses", []) + destinations.get(
+            "CcAddresses", []) + destinations.get(
+            "BccAddresses", [])
         for addr in alladdress:
             if SESFeedback.SUCCESS_ADDR in addr:
                 return SESFeedback.DELIVERY
@@ -74,6 +74,33 @@ class EmailResponse(BaseResponse):
         template = self.response_template(SEND_EMAIL_RESPONSE)
         return template.render(message=message)

+    def send_templated_email(self):
+        source = self.querystring.get('Source')[0]
+        template = self.querystring.get('Template')
+        template_data = self.querystring.get('TemplateData')
+
+        destinations = {
+            'ToAddresses': [],
+            'CcAddresses': [],
+            'BccAddresses': [],
+        }
+        for dest_type in destinations:
+            # consume up to 51 to allow exception
+            for i in six.moves.range(1, 52):
+                field = 'Destination.%s.member.%s' % (dest_type, i)
+                address = self.querystring.get(field)
+                if address is None:
+                    break
+                destinations[dest_type].append(address[0])
+
+        message = ses_backend.send_templated_email(source,
+                                                   template,
+                                                   template_data,
+                                                   destinations,
+                                                   self.region)
+        template = self.response_template(SEND_TEMPLATED_EMAIL_RESPONSE)
+        return template.render(message=message)
+
     def send_raw_email(self):
         source = self.querystring.get('Source')
         if source is not None:
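The response handler above rebuilds the destination lists from indexed form fields (`Destination.ToAddresses.member.1`, `Destination.ToAddresses.member.2`, …), stopping at the first missing index. A simplified sketch of that loop, assuming a plain dict of querystring values (the real code caps the scan at 51 so the backend can still raise the recipient-limit error):

```python
import itertools

def parse_destinations(querystring):
    """Collect Destination.<type>.member.<i> fields into per-type address lists."""
    destinations = {'ToAddresses': [], 'CcAddresses': [], 'BccAddresses': []}
    for dest_type in destinations:
        for i in itertools.count(1):
            field = 'Destination.%s.member.%s' % (dest_type, i)
            address = querystring.get(field)
            if address is None:
                # Indices are contiguous, so the first gap ends the list:
                break
            destinations[dest_type].append(address[0])
    return destinations


qs = {
    'Destination.ToAddresses.member.1': ['to1@example.com'],
    'Destination.ToAddresses.member.2': ['to2@example.com'],
    'Destination.CcAddresses.member.1': ['cc1@example.com'],
}
dests = parse_destinations(qs)
```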
@@ -193,6 +220,15 @@ SEND_EMAIL_RESPONSE = """<SendEmailResponse xmlns="http://ses.amazonaws.com/doc/
   </ResponseMetadata>
 </SendEmailResponse>"""

+SEND_TEMPLATED_EMAIL_RESPONSE = """<SendTemplatedEmailResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
+  <SendTemplatedEmailResult>
+    <MessageId>{{ message.id }}</MessageId>
+  </SendTemplatedEmailResult>
+  <ResponseMetadata>
+    <RequestId>d5964849-c866-11e0-9beb-01a62d68c57f</RequestId>
+  </ResponseMetadata>
+</SendTemplatedEmailResponse>"""
+
 SEND_RAW_EMAIL_RESPONSE = """<SendRawEmailResponse xmlns="http://ses.amazonaws.com/doc/2010-12-01/">
   <SendRawEmailResult>
     <MessageId>{{ message.id }}</MessageId>
@@ -59,7 +59,7 @@ class StepFunctionBackend(BaseBackend):
                        u'\u0090', u'\u0091', u'\u0092', u'\u0093', u'\u0094', u'\u0095',
                        u'\u0096', u'\u0097', u'\u0098', u'\u0099',
                        u'\u009A', u'\u009B', u'\u009C', u'\u009D', u'\u009E', u'\u009F']
-    accepted_role_arn_format = re.compile('arn:aws:iam:(?P<account_id>[0-9]{12}):role/.+')
+    accepted_role_arn_format = re.compile('arn:aws:iam::(?P<account_id>[0-9]{12}):role/.+')
     accepted_mchn_arn_format = re.compile('arn:aws:states:[-0-9a-zA-Z]+:(?P<account_id>[0-9]{12}):stateMachine:.+')
     accepted_exec_arn_format = re.compile('arn:aws:states:[-0-9a-zA-Z]+:(?P<account_id>[0-9]{12}):execution:.+')
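The one-character regex fix above matters because IAM is a global service: its ARNs have an empty region field, so two colons appear back-to-back (`arn:aws:iam::<account>:role/...`). A quick check of the old pattern against the corrected one:

```python
import re

arn = 'arn:aws:iam::123456789012:role/my-role'

old_pattern = re.compile('arn:aws:iam:(?P<account_id>[0-9]{12}):role/.+')
new_pattern = re.compile('arn:aws:iam::(?P<account_id>[0-9]{12}):role/.+')

# The old pattern expects digits right after 'iam:', but a real IAM ARN
# has the empty-region colon there, so the match fails:
old_match = old_pattern.match(arn)
new_match = new_pattern.match(arn)
```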
@@ -96,12 +96,12 @@ class StepFunctionBackend(BaseBackend):
         if sm:
             self.state_machines.remove(sm)

-    def start_execution(self, state_machine_arn):
+    def start_execution(self, state_machine_arn, name=None):
         state_machine_name = self.describe_state_machine(state_machine_arn).name
         execution = Execution(region_name=self.region_name,
                               account_id=self._get_account_id(),
                               state_machine_name=state_machine_name,
-                              execution_name=str(uuid4()),
+                              execution_name=name or str(uuid4()),
                               state_machine_arn=state_machine_arn)
         self.executions.append(execution)
         return execution
@@ -143,7 +143,7 @@ class StepFunctionBackend(BaseBackend):
     def _validate_machine_arn(self, machine_arn):
         self._validate_arn(arn=machine_arn,
                            regex=self.accepted_mchn_arn_format,
-                           invalid_msg="Invalid Role Arn: '" + machine_arn + "'")
+                           invalid_msg="Invalid State Machine Arn: '" + machine_arn + "'")

     def _validate_execution_arn(self, execution_arn):
         self._validate_arn(arn=execution_arn,
@@ -86,7 +86,8 @@ class StepFunctionResponse(BaseResponse):
     @amzn_request_id
     def start_execution(self):
         arn = self._get_param('stateMachineArn')
-        execution = self.stepfunction_backend.start_execution(arn)
+        name = self._get_param('name')
+        execution = self.stepfunction_backend.start_execution(arn, name)
         response = {'executionArn': execution.execution_arn,
                     'startDate': execution.start_date}
         return 200, {}, json.dumps(response)
@@ -128,8 +128,7 @@ GET_FEDERATION_TOKEN_RESPONSE = """<GetFederationTokenResponse xmlns="https://st
 </GetFederationTokenResponse>"""


-ASSUME_ROLE_RESPONSE = """<AssumeRoleResponse xmlns="https://sts.amazonaws.com/doc/
-2011-06-15/">
+ASSUME_ROLE_RESPONSE = """<AssumeRoleResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
   <AssumeRoleResult>
     <Credentials>
       <SessionToken>{{ role.session_token }}</SessionToken>
@@ -4,6 +4,7 @@ import boto3
 from botocore.exceptions import ClientError
 from nose.tools import assert_raises

+from moto import mock_s3
 from moto.config import mock_config

@@ -1009,3 +1010,177 @@ def test_delete_delivery_channel():
     with assert_raises(ClientError) as ce:
         client.delete_delivery_channel(DeliveryChannelName='testchannel')
     assert ce.exception.response['Error']['Code'] == 'NoSuchDeliveryChannelException'
+
+
+@mock_config
+@mock_s3
+def test_list_discovered_resource():
+    """NOTE: We are only really testing the Config part. For each individual service, please add tests
+    for that individual service's "list_config_service_resources" function.
+    """
+    client = boto3.client('config', region_name='us-west-2')
+
+    # With nothing created yet:
+    assert not client.list_discovered_resources(resourceType='AWS::S3::Bucket')['resourceIdentifiers']
+
+    # Create some S3 buckets:
+    s3_client = boto3.client('s3', region_name='us-west-2')
+    for x in range(0, 10):
+        s3_client.create_bucket(Bucket='bucket{}'.format(x), CreateBucketConfiguration={'LocationConstraint': 'us-west-2'})
+
+    # Now try:
+    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket')
+    assert len(result['resourceIdentifiers']) == 10
+    for x in range(0, 10):
+        assert result['resourceIdentifiers'][x] == {
+            'resourceType': 'AWS::S3::Bucket',
+            'resourceId': 'bucket{}'.format(x),
+            'resourceName': 'bucket{}'.format(x)
+        }
+    assert not result.get('nextToken')
+
+    # Test that pagination places a proper nextToken in the response and also that the limit works:
+    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, nextToken='bucket1')
+    assert len(result['resourceIdentifiers']) == 1
+    assert result['nextToken'] == 'bucket2'
+
+    # Try with a resource name:
+    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, resourceName='bucket1')
+    assert len(result['resourceIdentifiers']) == 1
+    assert not result.get('nextToken')
+
+    # Try with a resource ID:
+    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, resourceIds=['bucket1'])
+    assert len(result['resourceIdentifiers']) == 1
+    assert not result.get('nextToken')
+
+    # Try with duplicated resource IDs:
+    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, resourceIds=['bucket1', 'bucket1'])
+    assert len(result['resourceIdentifiers']) == 1
+    assert not result.get('nextToken')
+
+    # Test with an invalid resource type:
+    assert not client.list_discovered_resources(resourceType='LOL::NOT::A::RESOURCE::TYPE')['resourceIdentifiers']
+
+    # Test with an invalid page num > 100:
+    with assert_raises(ClientError) as ce:
+        client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=101)
+    assert '101' in ce.exception.response['Error']['Message']
+
+    # Test by supplying both resourceName and also resourceIds:
+    with assert_raises(ClientError) as ce:
+        client.list_discovered_resources(resourceType='AWS::S3::Bucket', resourceName='whats', resourceIds=['up', 'doc'])
+    assert 'Both Resource ID and Resource Name cannot be specified in the request' in ce.exception.response['Error']['Message']
+
+    # More than 20 resourceIds:
+    resource_ids = ['{}'.format(x) for x in range(0, 21)]
+    with assert_raises(ClientError) as ce:
+        client.list_discovered_resources(resourceType='AWS::S3::Bucket', resourceIds=resource_ids)
+    assert 'The specified list had more than 20 resource ID\'s.' in ce.exception.response['Error']['Message']
+
+
+@mock_config
+@mock_s3
+def test_list_aggregate_discovered_resource():
+    """NOTE: We are only really testing the Config part. For each individual service, please add tests
+    for that individual service's "list_config_service_resources" function.
+    """
+    client = boto3.client('config', region_name='us-west-2')
+
+    # Without an aggregator:
+    with assert_raises(ClientError) as ce:
+        client.list_aggregate_discovered_resources(ConfigurationAggregatorName='lolno', ResourceType='AWS::S3::Bucket')
+    assert 'The configuration aggregator does not exist' in ce.exception.response['Error']['Message']
+
+    # Create the aggregator:
+    account_aggregation_source = {
+        'AccountIds': [
+            '012345678910',
+            '111111111111',
+            '222222222222'
+        ],
+        'AllAwsRegions': True
+    }
+    client.put_configuration_aggregator(
+        ConfigurationAggregatorName='testing',
+        AccountAggregationSources=[account_aggregation_source]
+    )
+
+    # With nothing created yet:
+    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing',
+                                                          ResourceType='AWS::S3::Bucket')['ResourceIdentifiers']
+
+    # Create some S3 buckets:
+    s3_client = boto3.client('s3', region_name='us-west-2')
+    for x in range(0, 10):
+        s3_client.create_bucket(Bucket='bucket{}'.format(x), CreateBucketConfiguration={'LocationConstraint': 'us-west-2'})
+
+    s3_client_eu = boto3.client('s3', region_name='eu-west-1')
+    for x in range(10, 12):
+        s3_client_eu.create_bucket(Bucket='eu-bucket{}'.format(x), CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'})
+
+    # Now try:
+    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket')
+    assert len(result['ResourceIdentifiers']) == 12
+    for x in range(0, 10):
+        assert result['ResourceIdentifiers'][x] == {
+            'SourceAccountId': '123456789012',
+            'ResourceType': 'AWS::S3::Bucket',
+            'ResourceId': 'bucket{}'.format(x),
+            'ResourceName': 'bucket{}'.format(x),
+            'SourceRegion': 'us-west-2'
+        }
+    for x in range(11, 12):
+        assert result['ResourceIdentifiers'][x] == {
+            'SourceAccountId': '123456789012',
+            'ResourceType': 'AWS::S3::Bucket',
+            'ResourceId': 'eu-bucket{}'.format(x),
+            'ResourceName': 'eu-bucket{}'.format(x),
+            'SourceRegion': 'eu-west-1'
+        }
+
+    assert not result.get('NextToken')
+
+    # Test that pagination places a proper nextToken in the response and also that the limit works:
+    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
+                                                        Limit=1, NextToken='bucket1')
+    assert len(result['ResourceIdentifiers']) == 1
+    assert result['NextToken'] == 'bucket2'
+
+    # Try with a resource name:
+    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
+                                                        Limit=1, NextToken='bucket1', Filters={'ResourceName': 'bucket1'})
+    assert len(result['ResourceIdentifiers']) == 1
+    assert not result.get('NextToken')
+
+    # Try with a resource ID:
+    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
+                                                        Limit=1, NextToken='bucket1', Filters={'ResourceId': 'bucket1'})
+    assert len(result['ResourceIdentifiers']) == 1
+    assert not result.get('NextToken')
+
+    # Try with a region specified:
+    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
+                                                        Filters={'Region': 'eu-west-1'})
+    assert len(result['ResourceIdentifiers']) == 2
+    assert result['ResourceIdentifiers'][0]['SourceRegion'] == 'eu-west-1'
+    assert not result.get('NextToken')
+
+    # Try with both name and id set to the incorrect values:
+    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
+                                                          Filters={'ResourceId': 'bucket1',
+                                                                   'ResourceName': 'bucket2'})['ResourceIdentifiers']
+
+    # Test with an invalid resource type:
+    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing',
+                                                          ResourceType='LOL::NOT::A::RESOURCE::TYPE')['ResourceIdentifiers']
+
+    # Try with correct name but incorrect region:
+    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
+                                                          Filters={'ResourceId': 'bucket1',
+                                                                   'Region': 'us-west-1'})['ResourceIdentifiers']
+
+    # Test with an invalid page num > 100:
+    with assert_raises(ClientError) as ce:
+        client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket', Limit=101)
+    assert '101' in ce.exception.response['Error']['Message']
@@ -1676,15 +1676,7 @@ def test_query_global_secondary_index_when_created_via_update_table_resource():
             {
                 'AttributeName': 'user_id',
                 'AttributeType': 'N',
-            },
-            {
-                'AttributeName': 'forum_name',
-                'AttributeType': 'S'
-            },
-            {
-                'AttributeName': 'subject',
-                'AttributeType': 'S'
-            },
+            }
         ],
         ProvisionedThroughput={
             'ReadCapacityUnits': 5,
@@ -2258,6 +2250,34 @@ def test_batch_items_should_throw_exception_for_duplicate_request():
     ex.exception.response['Error']['Message'].should.equal('Provided list of item keys contains duplicates')


+@mock_dynamodb2
+def test_index_with_unknown_attributes_should_fail():
+    dynamodb = boto3.client('dynamodb', region_name='us-east-1')
+
+    expected_exception = 'Some index key attributes are not defined in AttributeDefinitions.'
+
+    with assert_raises(ClientError) as ex:
+        dynamodb.create_table(
+            AttributeDefinitions=[
+                {'AttributeName': 'customer_nr', 'AttributeType': 'S'},
+                {'AttributeName': 'last_name', 'AttributeType': 'S'}],
+            TableName='table_with_missing_attribute_definitions',
+            KeySchema=[
+                {'AttributeName': 'customer_nr', 'KeyType': 'HASH'},
+                {'AttributeName': 'last_name', 'KeyType': 'RANGE'}],
+            LocalSecondaryIndexes=[{
+                'IndexName': 'indexthataddsanadditionalattribute',
+                'KeySchema': [
+                    {'AttributeName': 'customer_nr', 'KeyType': 'HASH'},
+                    {'AttributeName': 'postcode', 'KeyType': 'RANGE'}],
+                'Projection': {'ProjectionType': 'ALL'}
+            }],
+            BillingMode='PAY_PER_REQUEST')
+
+    ex.exception.response['Error']['Code'].should.equal('ValidationException')
+    ex.exception.response['Error']['Message'].should.contain(expected_exception)
+
+
 def _create_user_table():
     client = boto3.client('dynamodb', region_name='us-east-1')
     client.create_table(
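The new test above exercises validation that every index key attribute appears in `AttributeDefinitions`. The check itself amounts to a set difference; a minimal sketch under that assumption (`find_undefined_index_attributes` is a hypothetical helper, not moto's actual implementation):

```python
def find_undefined_index_attributes(attribute_definitions, indexes):
    """Return index key attribute names missing from AttributeDefinitions."""
    defined = {attr['AttributeName'] for attr in attribute_definitions}
    used = {
        key['AttributeName']
        for index in indexes
        for key in index['KeySchema']
    }
    return sorted(used - defined)


missing = find_undefined_index_attributes(
    [{'AttributeName': 'customer_nr', 'AttributeType': 'S'},
     {'AttributeName': 'last_name', 'AttributeType': 'S'}],
    [{'IndexName': 'indexthataddsanadditionalattribute',
      'KeySchema': [{'AttributeName': 'customer_nr', 'KeyType': 'HASH'},
                    {'AttributeName': 'postcode', 'KeyType': 'RANGE'}]}],
)
# 'postcode' is used by the index but never defined, which is exactly
# the shape the test's create_table call rejects.
```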
@@ -1765,6 +1765,14 @@ def test_boto3_update_table_gsi_throughput():
                 'AttributeName': 'subject',
                 'AttributeType': 'S'
             },
+            {
+                'AttributeName': 'username',
+                'AttributeType': 'S'
+            },
+            {
+                'AttributeName': 'created',
+                'AttributeType': 'S'
+            }
         ],
         ProvisionedThroughput={
             'ReadCapacityUnits': 5,
@@ -1939,6 +1947,14 @@ def test_update_table_gsi_throughput():
                 'AttributeName': 'subject',
                 'AttributeType': 'S'
             },
+            {
+                'AttributeName': 'username',
+                'AttributeType': 'S'
+            },
+            {
+                'AttributeName': 'created',
+                'AttributeType': 'S'
+            }
         ],
         ProvisionedThroughput={
             'ReadCapacityUnits': 5,
@@ -34,7 +34,7 @@ def test_create_cluster():

 @mock_ecs
 def test_list_clusters():
-    client = boto3.client('ecs', region_name='us-east-1')
+    client = boto3.client('ecs', region_name='us-east-2')
     _ = client.create_cluster(
         clusterName='test_cluster0'
     )
@@ -43,9 +43,9 @@ def test_list_clusters():
     )
     response = client.list_clusters()
     response['clusterArns'].should.contain(
-        'arn:aws:ecs:us-east-1:012345678910:cluster/test_cluster0')
+        'arn:aws:ecs:us-east-2:012345678910:cluster/test_cluster0')
     response['clusterArns'].should.contain(
-        'arn:aws:ecs:us-east-1:012345678910:cluster/test_cluster1')
+        'arn:aws:ecs:us-east-2:012345678910:cluster/test_cluster1')


 @mock_ecs
@@ -2360,3 +2360,229 @@ def test_list_tags_for_resource_unknown():
         client.list_tags_for_resource(resourceArn=task_definition_arn)
     except ClientError as err:
         err.response['Error']['Code'].should.equal('ClientException')
+
+
+@mock_ecs
+def test_list_tags_for_resource_ecs_service():
+    client = boto3.client('ecs', region_name='us-east-1')
+    _ = client.create_cluster(
+        clusterName='test_ecs_cluster'
+    )
+    _ = client.register_task_definition(
+        family='test_ecs_task',
+        containerDefinitions=[
+            {
+                'name': 'hello_world',
+                'image': 'docker/hello-world:latest',
+                'cpu': 1024,
+                'memory': 400,
+                'essential': True,
+                'environment': [{
+                    'name': 'AWS_ACCESS_KEY_ID',
+                    'value': 'SOME_ACCESS_KEY'
+                }],
+                'logConfiguration': {'logDriver': 'json-file'}
+            }
+        ]
+    )
+    response = client.create_service(
+        cluster='test_ecs_cluster',
+        serviceName='test_ecs_service',
+        taskDefinition='test_ecs_task',
+        desiredCount=2,
+        tags=[
+            {'key': 'createdBy', 'value': 'moto-unittest'},
+            {'key': 'foo', 'value': 'bar'},
+        ]
+    )
+    response = client.list_tags_for_resource(resourceArn=response['service']['serviceArn'])
+    type(response['tags']).should.be(list)
+    response['tags'].should.equal([
+        {'key': 'createdBy', 'value': 'moto-unittest'},
+        {'key': 'foo', 'value': 'bar'},
+    ])
+
+
+@mock_ecs
+def test_list_tags_for_resource_unknown_service():
+    client = boto3.client('ecs', region_name='us-east-1')
+    service_arn = 'arn:aws:ecs:us-east-1:012345678910:service/unknown:1'
+    try:
+        client.list_tags_for_resource(resourceArn=service_arn)
+    except ClientError as err:
+        err.response['Error']['Code'].should.equal('ServiceNotFoundException')
+
+
+@mock_ecs
+def test_ecs_service_tag_resource():
+    client = boto3.client('ecs', region_name='us-east-1')
+    _ = client.create_cluster(
+        clusterName='test_ecs_cluster'
+    )
+    _ = client.register_task_definition(
+        family='test_ecs_task',
+        containerDefinitions=[
+            {
+                'name': 'hello_world',
+                'image': 'docker/hello-world:latest',
+                'cpu': 1024,
+                'memory': 400,
+                'essential': True,
+                'environment': [{
+                    'name': 'AWS_ACCESS_KEY_ID',
+                    'value': 'SOME_ACCESS_KEY'
+                }],
+                'logConfiguration': {'logDriver': 'json-file'}
+            }
+        ]
+    )
+    response = client.create_service(
+        cluster='test_ecs_cluster',
+        serviceName='test_ecs_service',
+        taskDefinition='test_ecs_task',
+        desiredCount=2
+    )
+    client.tag_resource(
+        resourceArn=response['service']['serviceArn'],
+        tags=[
+            {'key': 'createdBy', 'value': 'moto-unittest'},
+            {'key': 'foo', 'value': 'bar'},
+        ]
+    )
+    response = client.list_tags_for_resource(resourceArn=response['service']['serviceArn'])
+    type(response['tags']).should.be(list)
+    response['tags'].should.equal([
+        {'key': 'createdBy', 'value': 'moto-unittest'},
+        {'key': 'foo', 'value': 'bar'},
+    ])
+
+
+@mock_ecs
+def test_ecs_service_tag_resource_overwrites_tag():
+    client = boto3.client('ecs', region_name='us-east-1')
+    _ = client.create_cluster(
+        clusterName='test_ecs_cluster'
+    )
+    _ = client.register_task_definition(
+        family='test_ecs_task',
+        containerDefinitions=[
+            {
+                'name': 'hello_world',
+                'image': 'docker/hello-world:latest',
+                'cpu': 1024,
+                'memory': 400,
+                'essential': True,
+                'environment': [{
+                    'name': 'AWS_ACCESS_KEY_ID',
+                    'value': 'SOME_ACCESS_KEY'
+                }],
+                'logConfiguration': {'logDriver': 'json-file'}
+            }
+        ]
+    )
+    response = client.create_service(
+        cluster='test_ecs_cluster',
+        serviceName='test_ecs_service',
+        taskDefinition='test_ecs_task',
+        desiredCount=2,
+        tags=[
+            {'key': 'foo', 'value': 'bar'},
+        ]
+    )
+    client.tag_resource(
+        resourceArn=response['service']['serviceArn'],
+        tags=[
+            {'key': 'createdBy', 'value': 'moto-unittest'},
+            {'key': 'foo', 'value': 'hello world'},
+        ]
+    )
+    response = client.list_tags_for_resource(resourceArn=response['service']['serviceArn'])
+    type(response['tags']).should.be(list)
+    response['tags'].should.equal([
+        {'key': 'createdBy', 'value': 'moto-unittest'},
+        {'key': 'foo', 'value': 'hello world'},
+    ])
+
+
+@mock_ecs
+def test_ecs_service_untag_resource():
+    client = boto3.client('ecs', region_name='us-east-1')
+    _ = client.create_cluster(
+        clusterName='test_ecs_cluster'
+    )
+    _ = client.register_task_definition(
+        family='test_ecs_task',
+        containerDefinitions=[
+            {
+                'name': 'hello_world',
+                'image': 'docker/hello-world:latest',
+                'cpu': 1024,
+                'memory': 400,
+                'essential': True,
+                'environment': [{
+                    'name': 'AWS_ACCESS_KEY_ID',
+                    'value': 'SOME_ACCESS_KEY'
+                }],
+                'logConfiguration': {'logDriver': 'json-file'}
+            }
+        ]
+    )
+    response = client.create_service(
+        cluster='test_ecs_cluster',
+        serviceName='test_ecs_service',
+        taskDefinition='test_ecs_task',
+        desiredCount=2,
+        tags=[
+            {'key': 'foo', 'value': 'bar'},
+        ]
+    )
+    client.untag_resource(
+        resourceArn=response['service']['serviceArn'],
+        tagKeys=['foo']
+    )
+    response = client.list_tags_for_resource(resourceArn=response['service']['serviceArn'])
+    response['tags'].should.equal([])
+
+
+@mock_ecs
+def test_ecs_service_untag_resource_multiple_tags():
+    client = boto3.client('ecs', region_name='us-east-1')
+    _ = client.create_cluster(
+        clusterName='test_ecs_cluster'
+    )
+    _ = client.register_task_definition(
+        family='test_ecs_task',
+        containerDefinitions=[
+            {
+                'name': 'hello_world',
+                'image': 'docker/hello-world:latest',
+                'cpu': 1024,
+                'memory': 400,
+                'essential': True,
+                'environment': [{
+                    'name': 'AWS_ACCESS_KEY_ID',
+                    'value': 'SOME_ACCESS_KEY'
+                }],
+                'logConfiguration': {'logDriver': 'json-file'}
+            }
+        ]
+    )
+    response = client.create_service(
+        cluster='test_ecs_cluster',
+        serviceName='test_ecs_service',
+        taskDefinition='test_ecs_task',
+        desiredCount=2,
+        tags=[
+            {'key': 'foo', 'value': 'bar'},
+            {'key': 'createdBy', 'value': 'moto-unittest'},
+            {'key': 'hello', 'value': 'world'},
+        ]
+    )
+    client.untag_resource(
+        resourceArn=response['service']['serviceArn'],
+        tagKeys=['foo', 'createdBy']
+    )
+    response = client.list_tags_for_resource(resourceArn=response['service']['serviceArn'])
+    response['tags'].should.equal([
+        {'key': 'hello', 'value': 'world'},
+    ])
@@ -87,7 +87,7 @@ def test_describe_rule():
     assert(response is not None)
     assert(response.get('Name') == rule_name)
-    assert(response.get('Arn') is not None)
+    assert(response.get('Arn') == 'arn:aws:events:us-west-2:111111111111:rule/{0}'.format(rule_name))


 @mock_events
@@ -225,3 +225,65 @@ def test_get_log_events():
     for i in range(10):
         resp['events'][i]['timestamp'].should.equal(i)
         resp['events'][i]['message'].should.equal(str(i))
+
+
+@mock_logs
+def test_list_tags_log_group():
+    conn = boto3.client('logs', 'us-west-2')
+    log_group_name = 'dummy'
+    tags = {'tag_key_1': 'tag_value_1', 'tag_key_2': 'tag_value_2'}
+
+    response = conn.create_log_group(logGroupName=log_group_name)
+    response = conn.list_tags_log_group(logGroupName=log_group_name)
+    assert response['tags'] == {}
+
+    response = conn.delete_log_group(logGroupName=log_group_name)
+    response = conn.create_log_group(logGroupName=log_group_name, tags=tags)
+    response = conn.list_tags_log_group(logGroupName=log_group_name)
+    assert response['tags'] == tags
+
+    response = conn.delete_log_group(logGroupName=log_group_name)
+
+
+@mock_logs
+def test_tag_log_group():
+    conn = boto3.client('logs', 'us-west-2')
+    log_group_name = 'dummy'
+    tags = {'tag_key_1': 'tag_value_1'}
+    response = conn.create_log_group(logGroupName=log_group_name)
+
+    response = conn.tag_log_group(logGroupName=log_group_name, tags=tags)
+    response = conn.list_tags_log_group(logGroupName=log_group_name)
+    assert response['tags'] == tags
+
+    tags_with_added_value = {'tag_key_1': 'tag_value_1', 'tag_key_2': 'tag_value_2'}
+    response = conn.tag_log_group(logGroupName=log_group_name, tags={'tag_key_2': 'tag_value_2'})
+    response = conn.list_tags_log_group(logGroupName=log_group_name)
+    assert response['tags'] == tags_with_added_value
+
+    tags_with_updated_value = {'tag_key_1': 'tag_value_XX', 'tag_key_2': 'tag_value_2'}
+    response = conn.tag_log_group(logGroupName=log_group_name, tags={'tag_key_1': 'tag_value_XX'})
+    response = conn.list_tags_log_group(logGroupName=log_group_name)
+    assert response['tags'] == tags_with_updated_value
+
+    response = conn.delete_log_group(logGroupName=log_group_name)
+
+
+@mock_logs
+def test_untag_log_group():
+    conn = boto3.client('logs', 'us-west-2')
+    log_group_name = 'dummy'
+    response = conn.create_log_group(logGroupName=log_group_name)
+
+    tags = {'tag_key_1': 'tag_value_1', 'tag_key_2': 'tag_value_2'}
+    response = conn.tag_log_group(logGroupName=log_group_name, tags=tags)
+    response = conn.list_tags_log_group(logGroupName=log_group_name)
+    assert response['tags'] == tags
+
+    tags_to_remove = ['tag_key_1']
+    remaining_tags = {'tag_key_2': 'tag_value_2'}
+    response = conn.untag_log_group(logGroupName=log_group_name, tags=tags_to_remove)
+    response = conn.list_tags_log_group(logGroupName=log_group_name)
+    assert response['tags'] == remaining_tags
+
+    response = conn.delete_log_group(logGroupName=log_group_name)
@@ -32,6 +32,7 @@ import sure  # noqa

 from moto import settings, mock_s3, mock_s3_deprecated
 import moto.s3.models as s3model
+from moto.core.exceptions import InvalidNextTokenException

 if settings.TEST_SERVER_MODE:
     REDUCED_PART_SIZE = s3model.UPLOAD_PART_MIN_SIZE
@@ -273,6 +274,7 @@ def test_multipart_invalid_order():
     bucket.complete_multipart_upload.when.called_with(
         multipart.key_name, multipart.id, xml).should.throw(S3ResponseError)


 @mock_s3_deprecated
 @reduced_min_part_size
 def test_multipart_etag_quotes_stripped():
@@ -297,6 +299,7 @@ def test_multipart_etag_quotes_stripped():
     # we should get both parts as the key contents
     bucket.get_key("the-key").etag.should.equal(EXPECTED_ETAG)


 @mock_s3_deprecated
 @reduced_min_part_size
 def test_multipart_duplicate_upload():
@@ -421,18 +424,22 @@ def test_copy_key():
         "new-key").get_contents_as_string().should.equal(b"some value")


+@parameterized([
+    ("the-unicode-💩-key",),
+    ("key-with?question-mark",),
+])
 @mock_s3_deprecated
-def test_copy_key_with_unicode():
+def test_copy_key_with_special_chars(key_name):
     conn = boto.connect_s3('the_key', 'the_secret')
     bucket = conn.create_bucket("foobar")
     key = Key(bucket)
-    key.key = "the-unicode-💩-key"
+    key.key = key_name
     key.set_contents_from_string("some value")

-    bucket.copy_key('new-key', 'foobar', 'the-unicode-💩-key')
+    bucket.copy_key('new-key', 'foobar', key_name)

     bucket.get_key(
-        "the-unicode-💩-key").get_contents_as_string().should.equal(b"some value")
+        key_name).get_contents_as_string().should.equal(b"some value")
     bucket.get_key(
         "new-key").get_contents_as_string().should.equal(b"some value")
@@ -666,6 +673,7 @@ def test_delete_keys_invalid():
     result.deleted.should.have.length_of(0)
     result.errors.should.have.length_of(0)


 @mock_s3
 def test_boto3_delete_empty_keys_list():
     with assert_raises(ClientError) as err:
@@ -1640,6 +1648,7 @@ def test_boto3_delete_versioned_bucket():

     client.delete_bucket(Bucket='blah')


 @mock_s3
 def test_boto3_get_object_if_modified_since():
     s3 = boto3.client('s3', region_name='us-east-1')
@@ -1663,6 +1672,7 @@ def test_boto3_get_object_if_modified_since():
     e = err.exception
     e.response['Error'].should.equal({'Code': '304', 'Message': 'Not Modified'})


 @mock_s3
 def test_boto3_head_object_if_modified_since():
     s3 = boto3.client('s3', region_name='us-east-1')
@@ -1830,6 +1840,7 @@ def test_boto3_put_bucket_tagging():
     e.response["Error"]["Code"].should.equal("InvalidTag")
     e.response["Error"]["Message"].should.equal("Cannot provide multiple Tags with the same key")


 @mock_s3
 def test_boto3_get_bucket_tagging():
     s3 = boto3.client("s3", region_name="us-east-1")
@@ -2730,6 +2741,7 @@ def test_boto3_list_object_versions_with_versioning_enabled_late():
     response = s3.get_object(Bucket=bucket_name, Key=key)
     response['Body'].read().should.equal(items[-1])


 @mock_s3
 def test_boto3_bad_prefix_list_object_versions():
     s3 = boto3.client('s3', region_name='us-east-1')
@@ -2932,6 +2944,7 @@ TEST_XML = """\
 </ns0:WebsiteConfiguration>
 """


 @mock_s3
 def test_boto3_bucket_name_too_long():
     s3 = boto3.client('s3', region_name='us-east-1')
@@ -2939,6 +2952,7 @@ def test_boto3_bucket_name_too_long():
         s3.create_bucket(Bucket='x'*64)
     exc.exception.response['Error']['Code'].should.equal('InvalidBucketName')


 @mock_s3
 def test_boto3_bucket_name_too_short():
     s3 = boto3.client('s3', region_name='us-east-1')
@@ -2946,6 +2960,7 @@ def test_boto3_bucket_name_too_short():
         s3.create_bucket(Bucket='x'*2)
     exc.exception.response['Error']['Code'].should.equal('InvalidBucketName')


 @mock_s3
 def test_accelerated_none_when_unspecified():
     bucket_name = 'some_bucket'
@@ -2954,6 +2969,7 @@ def test_accelerated_none_when_unspecified():
     resp = s3.get_bucket_accelerate_configuration(Bucket=bucket_name)
     resp.shouldnt.have.key('Status')


 @mock_s3
 def test_can_enable_bucket_acceleration():
     bucket_name = 'some_bucket'
@@ -2968,6 +2984,7 @@ def test_can_enable_bucket_acceleration():
     resp.should.have.key('Status')
     resp['Status'].should.equal('Enabled')


 @mock_s3
 def test_can_suspend_bucket_acceleration():
     bucket_name = 'some_bucket'
@@ -2986,6 +3003,7 @@ def test_can_suspend_bucket_acceleration():
     resp.should.have.key('Status')
     resp['Status'].should.equal('Suspended')


 @mock_s3
 def test_suspending_acceleration_on_not_configured_bucket_does_nothing():
     bucket_name = 'some_bucket'
@@ -2999,6 +3017,7 @@ def test_suspending_acceleration_on_not_configured_bucket_does_nothing():
     resp = s3.get_bucket_accelerate_configuration(Bucket=bucket_name)
     resp.shouldnt.have.key('Status')


 @mock_s3
 def test_accelerate_configuration_status_validation():
     bucket_name = 'some_bucket'
@@ -3011,6 +3030,7 @@ def test_accelerate_configuration_status_validation():
     )
     exc.exception.response['Error']['Code'].should.equal('MalformedXML')


 @mock_s3
 def test_accelerate_configuration_is_not_supported_when_bucket_name_has_dots():
     bucket_name = 'some.bucket.with.dots'
@@ -3023,6 +3043,7 @@ def test_accelerate_configuration_is_not_supported_when_bucket_name_has_dots():
     )
     exc.exception.response['Error']['Code'].should.equal('InvalidRequest')


 def store_and_read_back_a_key(key):
     s3 = boto3.client('s3', region_name='us-east-1')
     bucket_name = 'mybucket'
@@ -3038,10 +3059,12 @@ def store_and_read_back_a_key(key):
     response = s3.get_object(Bucket=bucket_name, Key=key)
     response['Body'].read().should.equal(body)


 @mock_s3
 def test_paths_with_leading_slashes_work():
     store_and_read_back_a_key('/a-key')


 @mock_s3
 def test_root_dir_with_empty_name_works():
     if os.environ.get('TEST_SERVER_MODE', 'false').lower() == 'true':
@@ -3083,3 +3106,70 @@ def test_delete_objects_with_url_encoded_key(key):
     s3.delete_objects(Bucket=bucket_name, Delete={'Objects': [{'Key': key}]})
     assert_deleted()
+
+
+@mock_s3
+def test_list_config_discovered_resources():
+    from moto.s3.config import s3_config_query
+
+    # Without any buckets:
+    assert s3_config_query.list_config_service_resources("global", "global", None, None, 100, None) == ([], None)
+
+    # With 10 buckets in us-west-2:
+    for x in range(0, 10):
+        s3_config_query.backends['global'].create_bucket('bucket{}'.format(x), 'us-west-2')
+
+    # With 2 buckets in eu-west-1:
+    for x in range(10, 12):
+        s3_config_query.backends['global'].create_bucket('eu-bucket{}'.format(x), 'eu-west-1')
+
+    result, next_token = s3_config_query.list_config_service_resources(None, None, 100, None)
+    assert not next_token
+    assert len(result) == 12
+    for x in range(0, 10):
+        assert result[x] == {
+            'type': 'AWS::S3::Bucket',
+            'id': 'bucket{}'.format(x),
+            'name': 'bucket{}'.format(x),
+            'region': 'us-west-2'
+        }
+    for x in range(10, 12):
+        assert result[x] == {
+            'type': 'AWS::S3::Bucket',
+            'id': 'eu-bucket{}'.format(x),
+            'name': 'eu-bucket{}'.format(x),
+            'region': 'eu-west-1'
+        }
+
+    # With a name:
+    result, next_token = s3_config_query.list_config_service_resources(None, 'bucket0', 100, None)
+    assert len(result) == 1 and result[0]['name'] == 'bucket0' and not next_token
+
+    # With a region:
+    result, next_token = s3_config_query.list_config_service_resources(None, None, 100, None, resource_region='eu-west-1')
+    assert len(result) == 2 and not next_token and result[1]['name'] == 'eu-bucket11'
+
+    # With resource ids:
+    result, next_token = s3_config_query.list_config_service_resources(['bucket0', 'bucket1'], None, 100, None)
+    assert len(result) == 2 and result[0]['name'] == 'bucket0' and result[1]['name'] == 'bucket1' and not next_token
+
+    # With duplicated resource ids:
+    result, next_token = s3_config_query.list_config_service_resources(['bucket0', 'bucket0'], None, 100, None)
+    assert len(result) == 1 and result[0]['name'] == 'bucket0' and not next_token
+
+    # Pagination:
+    result, next_token = s3_config_query.list_config_service_resources(None, None, 1, None)
+    assert len(result) == 1 and result[0]['name'] == 'bucket0' and next_token == 'bucket1'
+
+    # Last Page:
+    result, next_token = s3_config_query.list_config_service_resources(None, None, 1, 'eu-bucket11', resource_region='eu-west-1')
+    assert len(result) == 1 and result[0]['name'] == 'eu-bucket11' and not next_token
+
+    # With a list of buckets:
+    result, next_token = s3_config_query.list_config_service_resources(['bucket0', 'bucket1'], None, 1, None)
+    assert len(result) == 1 and result[0]['name'] == 'bucket0' and next_token == 'bucket1'
+
+    # With an invalid page:
+    with assert_raises(InvalidNextTokenException) as inte:
+        s3_config_query.list_config_service_resources(None, None, 1, 'notabucket')
+
+    assert 'The nextToken provided is invalid' in inte.exception.message
@@ -80,6 +80,37 @@ def test_send_email():
     sent_count.should.equal(3)


+@mock_ses
+def test_send_templated_email():
+    conn = boto3.client('ses', region_name='us-east-1')
+
+    kwargs = dict(
+        Source="test@example.com",
+        Destination={
+            "ToAddresses": ["test_to@example.com"],
+            "CcAddresses": ["test_cc@example.com"],
+            "BccAddresses": ["test_bcc@example.com"],
+        },
+        Template="test_template",
+        TemplateData='{\"name\": \"test\"}'
+    )
+
+    conn.send_templated_email.when.called_with(
+        **kwargs).should.throw(ClientError)
+
+    conn.verify_domain_identity(Domain='example.com')
+    conn.send_templated_email(**kwargs)
+
+    too_many_addresses = list('to%s@example.com' % i for i in range(51))
+    conn.send_templated_email.when.called_with(
+        **dict(kwargs, Destination={'ToAddresses': too_many_addresses})
+    ).should.throw(ClientError)
+
+    send_quota = conn.get_send_quota()
+    sent_count = int(send_quota['SentLast24Hours'])
+    sent_count.should.equal(3)
+
+
 @mock_ses
 def test_send_html_email():
     conn = boto3.client('ses', region_name='us-east-1')
@@ -78,7 +78,7 @@ def test_state_machine_creation_requires_valid_role_arn():
     with assert_raises(ClientError) as exc:
         client.create_state_machine(name=name,
                                     definition=str(simple_definition),
-                                    roleArn='arn:aws:iam:1234:role/unknown_role')
+                                    roleArn='arn:aws:iam::1234:role/unknown_role')


 @mock_stepfunctions
@@ -243,11 +243,26 @@ def test_state_machine_start_execution():
     execution = client.start_execution(stateMachineArn=sm['stateMachineArn'])
     #
     execution['ResponseMetadata']['HTTPStatusCode'].should.equal(200)
-    expected_exec_name = 'arn:aws:states:' + region + ':' + _get_account_id() + ':execution:name:[a-zA-Z0-9-]+'
+    uuid_regex = '[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}'
+    expected_exec_name = 'arn:aws:states:' + region + ':' + _get_account_id() + ':execution:name:' + uuid_regex
     execution['executionArn'].should.match(expected_exec_name)
     execution['startDate'].should.be.a(datetime)


+@mock_stepfunctions
+@mock_sts
+def test_state_machine_start_execution_with_custom_name():
+    client = boto3.client('stepfunctions', region_name=region)
+    #
+    sm = client.create_state_machine(name='name', definition=str(simple_definition), roleArn=_get_default_role())
+    execution = client.start_execution(stateMachineArn=sm['stateMachineArn'], name='execution_name')
+    #
+    execution['ResponseMetadata']['HTTPStatusCode'].should.equal(200)
+    expected_exec_name = 'arn:aws:states:' + region + ':' + _get_account_id() + ':execution:name:execution_name'
+    execution['executionArn'].should.equal(expected_exec_name)
+    execution['startDate'].should.be.a(datetime)
+
+
 @mock_stepfunctions
 @mock_sts
 def test_state_machine_list_executions():
@@ -375,4 +390,4 @@ def _get_account_id():


 def _get_default_role():
-    return 'arn:aws:iam:' + _get_account_id() + ':role/unknown_sf_role'
+    return 'arn:aws:iam::' + _get_account_id() + ':role/unknown_sf_role'