Adding support for querying AWS Config for supported configurations.

At this time, only adding support for S3.
This commit is contained in:
Mike Grima 2019-09-23 17:16:20 -07:00
parent 4497f18c1a
commit c4b310d7a5
10 changed files with 675 additions and 2 deletions

CONFIG_README.md Normal file

@ -0,0 +1,107 @@
# AWS Config Querying Support in Moto
An experimental feature for AWS Config has been developed to provide AWS Config capabilities in your unit tests.
This feature is experimental because many services are not yet supported, and support will need to be added by the community
over time. This page details how the feature works and how you can use it.
## What is this and why would I use this?
AWS Config is an AWS service that describes your AWS resource types and can track their changes over time. At this time, moto does not
have support for handling the configuration history changes, but it does have a few methods mocked out that can be immensely useful
for unit testing.
If you are developing automation that needs to pull against AWS Config, then this will help you write tests that can simulate your
code in production.
## How does this work?
The AWS Config capabilities in moto work by examining the state of resources that are created within moto, and then returning that data
in the way that AWS Config would return it (sans history). This will work by querying all of the moto backends (regions) for a given
resource type.
However, this will only work on resource types that have this enabled.
### Current enabled resource types:
1. S3
## Developer Guide
There are several pieces to this for adding new capabilities to moto:
1. Listing resources
1. Describing resources
For both, there are a number of pre-requisites:
### Base Components
In the `moto/core/models.py` file is a class named `ConfigQueryModel`. This is a base class that keeps track of all the
resource type backends.
At a minimum, resource types that have this enabled will have:
1. A `config.py` file that will import the resource type backends (from the `__init__.py`)
1. In the resource's `config.py`, an implementation of the `ConfigQueryModel` class with logic unique to the resource type
1. An instantiation of the `ConfigQueryModel`
1. In the `moto/config/models.py` file, import the `ConfigQueryModel` instantiation, and update `RESOURCE_MAP` to have a mapping of the AWS Config resource type
to the instantiation from the previous step (just imported).
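The steps above can be sketched as follows. This is illustrative only: `QueueConfigQuery`, `queue_backends`, and the `AWS::SQS::Queue` wiring are hypothetical stand-ins for a new resource type, and `ConfigQueryModel` is inlined here to mirror the base class in `moto/core/models.py`:

```python
# Sketch only: `ConfigQueryModel` is inlined to mirror moto/core/models.py;
# `QueueConfigQuery` and its backends are hypothetical stand-ins.
class ConfigQueryModel(object):
    def __init__(self, backends):
        """Inits based on the resource type's backends (1 for each region if applicable)."""
        self.backends = backends

    def list_config_service_resources(self, resource_ids, resource_name, limit, next_token,
                                      backend_region=None, resource_region=None):
        raise NotImplementedError()


# Step 2: the implementation that would live in the resource's config.py:
class QueueConfigQuery(ConfigQueryModel):
    def list_config_service_resources(self, resource_ids, resource_name, limit, next_token,
                                      backend_region=None, resource_region=None):
        region = backend_region or resource_region
        queue_names = sorted(self.backends.get(region, {}).keys())
        identifiers = [{'type': 'AWS::SQS::Queue', 'id': name, 'name': name, 'region': region}
                       for name in queue_names]
        return identifiers, None  # no pagination in this sketch


# Step 3: instantiate it against the (hypothetical) regional backends:
queue_config_query = QueueConfigQuery({'us-east-1': {'my-queue': object()}})

# Step 4: map the AWS Config resource type to the instantiation in moto/config/models.py:
RESOURCE_MAP = {'AWS::SQS::Queue': queue_config_query}
```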
An example of the above is implemented for S3. You can see that by looking at:
1. `moto/s3/config.py`
1. `moto/config/models.py`
As well as the corresponding unit tests in:
1. `tests/test_s3/test_s3.py`
1. `tests/test_config/test_config.py`
Note: for unit testing, you will want to add a test to ensure that you can query all the resources effectively. The unit tests
for the `ConfigQueryModel` should not make use of `boto` to create resources, such as S3 buckets. You will need to use the
backend model methods to provision the resources instead. This keeps the tests compatible with the moto server. You should absolutely also add tests
in the resource type's own test suite for listing and object fetching.
### Listing
S3 is currently the model implementation, but it is also an odd case in that S3 is a global resource type with regional resource residency.
But for most resource types the following is true:
1. There are regional backends with their own sets of data
1. Config aggregation can pull data from any backend region -- we assume that everything lives in the same account
Implementing the listing capability will be different for each resource type. At a minimum, you will need to return a `List` of `Dict`s
that look like this:
```python
[
    {
        'type': 'AWS::The AWS Config data type',
        'name': 'The name of the resource',
        'id': 'The ID of the resource',
        'region': 'The region of the resource -- if global, then you may want to have the calling logic '
                  'pass in the aggregator region for the resource region -- or just us-east-1 :P'
    },
    ...
]
```
It's recommended to read the comment for the `ConfigQueryModel` [base class here](moto/core/models.py).
^^ The AWS Config code will see this and format it correctly for both aggregated and non-aggregated calls.
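For illustration, this is how the Config backend shapes those identifiers (mirroring the response-building code in `moto/config/models.py`, with sample identifier data): the non-aggregated response drops the region, while the aggregated response adds the source account and region:

```python
# Sample identifier as returned by a resource type's list_config_service_resources:
identifiers = [{'type': 'AWS::S3::Bucket', 'id': 'bucket0', 'name': 'bucket0', 'region': 'us-west-2'}]

# Shape of the list_discovered_resources (non-aggregated) response:
non_aggregated = {'resourceIdentifiers': [
    {'resourceType': i['type'], 'resourceId': i['id'], 'resourceName': i['name']}
    for i in identifiers
]}

# Shape of the list_aggregate_discovered_resources (aggregated) response:
aggregated = {'ResourceIdentifiers': [
    {'SourceAccountId': '123456789012', 'SourceRegion': i['region'],
     'ResourceType': i['type'], 'ResourceId': i['id'], 'ResourceName': i['name']}
    for i in identifiers
]}
```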
#### General implementation tips
The aggregation and non-aggregation querying can and should just use the same overall logic. The differences are:
1. Non-aggregated listing will specify the region-name of the resource backend `backend_region`
1. Aggregated listing will need to be able to list resource types across ALL backends and filter optionally by passing in `resource_region`.
An example of a working implementation of this is [S3](moto/s3/config.py).
Pagination should generally be able to pull resources across any region, so pagination tokens should be sharded by `region-item-name`. This is not done
for S3 because S3 has a globally unique namespace.
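A sketch of what that sharded pagination might look like for a hypothetical regional resource type. The `<region>/<name>` token format is an assumption for illustration, not what S3 does (S3's tokens are just the bucket name):

```python
# Hypothetical sketch: pagination tokens sharded as '<region>/<name>' so that a
# page of results can span regional backends.
def paginate_across_regions(items, limit, next_token=None):
    """items: (region, name) pairs collected from all regional backends."""
    sorted_tokens = sorted('{}/{}'.format(region, name) for region, name in items)

    start = 0
    if next_token:
        if next_token not in sorted_tokens:
            # moto would raise InvalidNextTokenException here:
            raise ValueError('The nextToken provided is invalid')
        start = sorted_tokens.index(next_token)

    page = sorted_tokens[start:start + limit]
    new_token = sorted_tokens[start + limit] if len(sorted_tokens) > start + limit else None
    return page, new_token
```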
### Describing Resources
TODO: Need to fill this in when it's implemented

README.md

@ -297,6 +297,9 @@ def test_describe_instances_allowed():
See [the related test suite](https://github.com/spulec/moto/blob/master/tests/test_core/test_auth.py) for more examples.
## Experimental: AWS Config Querying
For details about the experimental AWS Config support please see the [AWS Config readme here](CONFIG_README.md).
## Very Important -- Recommended Usage
There are some important caveats to be aware of when using moto:

moto/config/exceptions.py

@ -230,3 +230,27 @@ class TooManyTags(JsonRESTError):
super(TooManyTags, self).__init__(
'ValidationException', "1 validation error detected: Value '{}' at '{}' failed to satisfy "
"constraint: Member must have length less than or equal to 50.".format(tags, param))
class InvalidResourceParameters(JsonRESTError):
    code = 400

    def __init__(self):
        super(InvalidResourceParameters, self).__init__('ValidationException', 'Both Resource ID and Resource Name '
                                                                               'cannot be specified in the request')


class InvalidLimit(JsonRESTError):
    code = 400

    def __init__(self, value):
        super(InvalidLimit, self).__init__('ValidationException', 'Value \'{value}\' at \'limit\' failed to satisfy constraint: Member'
                                                                  ' must have value less than or equal to 100'.format(value=value))


class TooManyResourceIds(JsonRESTError):
    code = 400

    def __init__(self):
        super(TooManyResourceIds, self).__init__('ValidationException', "The specified list had more than 20 resource ID's. "
                                                                        "It must have '20' or less items")

moto/config/models.py

@ -17,11 +17,12 @@ from moto.config.exceptions import InvalidResourceTypeException, InvalidDelivery
InvalidSNSTopicARNException, MaxNumberOfDeliveryChannelsExceededException, NoAvailableDeliveryChannelException, \
NoSuchDeliveryChannelException, LastDeliveryChannelDeleteFailedException, TagKeyTooBig, \
TooManyTags, TagValueTooBig, TooManyAccountSources, InvalidParameterValueException, InvalidNextTokenException, \
NoSuchConfigurationAggregatorException, InvalidTagCharacters, DuplicateTags
NoSuchConfigurationAggregatorException, InvalidTagCharacters, DuplicateTags, InvalidLimit, InvalidResourceParameters, TooManyResourceIds
from moto.core import BaseBackend, BaseModel
from moto.s3.config import s3_config_query
DEFAULT_ACCOUNT_ID = 123456789012
DEFAULT_ACCOUNT_ID = '123456789012'
POP_STRINGS = [
'capitalizeStart',
'CapitalizeStart',
@ -32,6 +33,11 @@ POP_STRINGS = [
]
DEFAULT_PAGE_SIZE = 100
# Map the Config resource type to a backend:
RESOURCE_MAP = {
'AWS::S3::Bucket': s3_config_query
}
def datetime2int(date):
return int(time.mktime(date.timetuple()))
@ -680,6 +686,110 @@ class ConfigBackend(BaseBackend):
del self.delivery_channels[channel_name]
    def list_discovered_resources(self, resource_type, backend_region, resource_ids, resource_name, limit, next_token):
        """This will query against the mocked AWS Config listing function that must exist for the resource backend.

        :param resource_type: The AWS Config resource type (e.g. 'AWS::S3::Bucket')
        :param backend_region: The region of the backend the API request arrived from
        :param resource_ids: An optional list of resource IDs to filter on
        :param resource_name: An optional resource name to filter on
        :param limit: The page size
        :param next_token: The pagination token
        :return: A dict with the 'resourceIdentifiers' list and an optional 'nextToken'
        """
        identifiers = []
        new_token = None

        limit = limit or DEFAULT_PAGE_SIZE
        if limit > DEFAULT_PAGE_SIZE:
            raise InvalidLimit(limit)

        if resource_ids and resource_name:
            raise InvalidResourceParameters()

        # Only 20 maximum Resource IDs:
        if resource_ids and len(resource_ids) > 20:
            raise TooManyResourceIds()

        # If the resource type exists and the backend region is implemented in moto, then
        # call upon the resource type's Config Query class to retrieve the list of resources that match the criteria:
        if RESOURCE_MAP.get(resource_type, {}):
            # Is this a global resource type? -- if so, re-write the region to 'global':
            if RESOURCE_MAP[resource_type].backends.get('global'):
                backend_region = 'global'

            # For non-aggregated queries, we only care about the backend_region. Need to verify that moto has implemented
            # the region for the given backend:
            if RESOURCE_MAP[resource_type].backends.get(backend_region):
                # Fetch the resources for the backend's region:
                identifiers, new_token = \
                    RESOURCE_MAP[resource_type].list_config_service_resources(resource_ids, resource_name, limit, next_token)

        result = {'resourceIdentifiers': [
            {
                'resourceType': identifier['type'],
                'resourceId': identifier['id'],
                'resourceName': identifier['name']
            }
            for identifier in identifiers]
        }

        if new_token:
            result['nextToken'] = new_token

        return result
    def list_aggregate_discovered_resources(self, aggregator_name, resource_type, filters, limit, next_token):
        """This will query against the mocked AWS Config listing function that must exist for the resource backend.

        As far as moto goes -- the only real difference between this function and the `list_discovered_resources` function is that
        this will require a Config Aggregator be set up a priori and can search based on resource regions.

        :param aggregator_name: The name of the Config Aggregator to query against
        :param resource_type: The AWS Config resource type (e.g. 'AWS::S3::Bucket')
        :param filters: An optional dict of 'Region', 'ResourceId', and/or 'ResourceName' filters
        :param limit: The page size
        :param next_token: The pagination token
        :return: A dict with the 'ResourceIdentifiers' list and an optional 'NextToken'
        """
        if not self.config_aggregators.get(aggregator_name):
            raise NoSuchConfigurationAggregatorException()

        identifiers = []
        new_token = None
        filters = filters or {}

        limit = limit or DEFAULT_PAGE_SIZE
        if limit > DEFAULT_PAGE_SIZE:
            raise InvalidLimit(limit)

        # If the resource type exists and the backend region is implemented in moto, then
        # call upon the resource type's Config Query class to retrieve the list of resources that match the criteria:
        if RESOURCE_MAP.get(resource_type, {}):
            # We only care about a filter's Region, Resource Name, and Resource ID:
            resource_region = filters.get('Region')
            resource_id = [filters['ResourceId']] if filters.get('ResourceId') else None
            resource_name = filters.get('ResourceName')

            identifiers, new_token = \
                RESOURCE_MAP[resource_type].list_config_service_resources(resource_id, resource_name, limit, next_token,
                                                                          resource_region=resource_region)

        result = {'ResourceIdentifiers': [
            {
                'SourceAccountId': DEFAULT_ACCOUNT_ID,
                'SourceRegion': identifier['region'],
                'ResourceType': identifier['type'],
                'ResourceId': identifier['id'],
                'ResourceName': identifier['name']
            }
            for identifier in identifiers]
        }

        if new_token:
            result['NextToken'] = new_token

        return result
config_backends = {}
boto3_session = Session()

moto/config/responses.py

@ -84,3 +84,34 @@ class ConfigResponse(BaseResponse):
def stop_configuration_recorder(self):
self.config_backend.stop_configuration_recorder(self._get_param('ConfigurationRecorderName'))
return ""
    def list_discovered_resources(self):
        schema = self.config_backend.list_discovered_resources(self._get_param('resourceType'),
                                                               self.region,
                                                               self._get_param('resourceIds'),
                                                               self._get_param('resourceName'),
                                                               self._get_param('limit'),
                                                               self._get_param('nextToken'))
        return json.dumps(schema)

    def list_aggregate_discovered_resources(self):
        schema = self.config_backend.list_aggregate_discovered_resources(self._get_param('ConfigurationAggregatorName'),
                                                                         self._get_param('ResourceType'),
                                                                         self._get_param('Filters'),
                                                                         self._get_param('Limit'),
                                                                         self._get_param('NextToken'))
        return json.dumps(schema)

    """
    def batch_get_resource_config(self):
        # TODO implement me!
        return ""

    def batch_get_aggregate_resource_config(self):
        # TODO implement me!
        return ""

    def get_resource_config_history(self):
        # TODO implement me!
        return ""
    """

moto/core/exceptions.py

@ -104,3 +104,11 @@ class AuthFailureError(RESTError):
super(AuthFailureError, self).__init__(
'AuthFailure',
"AWS was not able to validate the provided access credentials")
class InvalidNextTokenException(JsonRESTError):
    """For AWS Config resource listing. This will be used by many different resource types, and so it is in moto.core."""
    code = 400

    def __init__(self):
        super(InvalidNextTokenException, self).__init__('InvalidNextTokenException', 'The nextToken provided is invalid')

moto/core/models.py

@ -538,6 +538,65 @@ class BaseBackend(object):
else:
return HttprettyMockAWS({'global': self})
    # def list_config_service_resources(self, resource_ids, resource_name, limit, next_token):
    #    """For AWS Config. This will list all of the resources of the given type and optional resource name and region"""
    #    raise NotImplementedError()


class ConfigQueryModel(object):

    def __init__(self, backends):
        """Inits based on the resource type's backends (1 for each region if applicable)"""
        self.backends = backends

    def list_config_service_resources(self, resource_ids, resource_name, limit, next_token, backend_region=None, resource_region=None):
        """For AWS Config. This will list all of the resources of the given type and optional resource name and region.

        This supports both aggregated and non-aggregated listing. The following notes the difference:

        - Non-Aggregated Listing -
        This only lists resources within a region. The way that this is implemented in moto is based on the region
        for the resource backend.

        You must set the `backend_region` to the region that the API request arrived from. `resource_region` can be set to `None`.

        - Aggregated Listing -
        This lists resources from all potential regional backends. For non-global resource types, this should collect a full
        list of resources from all the backends, and then be able to filter from the resource region. This is because an
        aggregator can aggregate resources from multiple regions. In moto, aggregated regions will *assume full aggregation
        from all resources in all regions for a given resource type*.

        The `backend_region` should be set to `None` for these queries, and the `resource_region` should optionally be set to
        the `Filters` region parameter to filter out resources that reside in a specific region.

        For aggregated listings, pagination logic should be set such that the next page can properly span all the region backends.
        As such, the proper way to implement is to first obtain a full list of results from all the region backends, and then filter
        from there. It may be valuable to make the pagination token a concatenation of the region and resource name.

        :param resource_ids: An optional list of resource IDs to filter on
        :param resource_name: An optional resource name to filter on
        :param limit: The page size
        :param next_token: The pagination token
        :param backend_region: The region for the backend to pull results from. Set to `None` if this is an aggregated query.
        :param resource_region: The region to filter resources on for aggregated queries. Set to `None` for non-aggregated queries.
        :return: This should return a list of Dicts that have the following fields:
            [
                {
                    'type': 'AWS::The AWS Config data type',
                    'name': 'The name of the resource',
                    'id': 'The ID of the resource',
                    'region': 'The region of the resource -- if global, then you may want to have the calling logic pass in the
                               aggregator region in for the resource region -- or just us-east-1 :P'
                }
                , ...
            ]
        """
        raise NotImplementedError()

    def get_config_resource(self):
        """TODO implement me."""
        raise NotImplementedError()
class base_decorator(object):
mock_backend = MockAWS

moto/s3/config.py Normal file

@ -0,0 +1,70 @@
from moto.core.exceptions import InvalidNextTokenException
from moto.core.models import ConfigQueryModel
from moto.s3 import s3_backends
class S3ConfigQuery(ConfigQueryModel):

    def list_config_service_resources(self, resource_ids, resource_name, limit, next_token, backend_region=None, resource_region=None):
        # S3 need not care about "backend_region" as S3 is global. The resource_region only matters for aggregated queries as you can
        # filter on bucket regions for them. For other resource types, you would need to iterate appropriately for the backend_region.

        # Resource IDs are the same as S3 bucket names.
        # For aggregation -- did we get both a resource ID and a resource name?
        if resource_ids and resource_name:
            # If the values are different, then return an empty list:
            if resource_name not in resource_ids:
                return [], None

        # If no filter was passed in for resource names/ids then return them all:
        if not resource_ids and not resource_name:
            bucket_list = list(self.backends['global'].buckets.keys())

        else:
            # Match the resource name / ID:
            bucket_list = []
            filter_buckets = [resource_name] if resource_name else resource_ids

            for bucket in self.backends['global'].buckets.keys():
                if bucket in filter_buckets:
                    bucket_list.append(bucket)

        # If a resource_region was supplied (aggregated only), then filter on bucket region too:
        if resource_region:
            region_buckets = []

            for bucket in bucket_list:
                if self.backends['global'].buckets[bucket].region_name == resource_region:
                    region_buckets.append(bucket)

            bucket_list = region_buckets

        if not bucket_list:
            return [], None

        # Pagination logic:
        sorted_buckets = sorted(bucket_list)
        new_token = None

        # Get the start:
        if not next_token:
            start = 0
        else:
            # Tokens for this moto feature are just the bucket name:
            # For OTHER non-global resource types, it's the region concatenated with the resource ID.
            if next_token not in sorted_buckets:
                raise InvalidNextTokenException()

            start = sorted_buckets.index(next_token)

        # Get the list of items to collect:
        bucket_list = sorted_buckets[start:(start + limit)]

        if len(sorted_buckets) > (start + limit):
            new_token = sorted_buckets[start + limit]

        return [{'type': 'AWS::S3::Bucket', 'id': bucket, 'name': bucket, 'region': self.backends['global'].buckets[bucket].region_name}
                for bucket in bucket_list], new_token


s3_config_query = S3ConfigQuery(s3_backends)

tests/test_config/test_config.py

@ -4,6 +4,7 @@ import boto3
from botocore.exceptions import ClientError
from nose.tools import assert_raises
from moto import mock_s3
from moto.config import mock_config
@ -1009,3 +1010,177 @@ def test_delete_delivery_channel():
with assert_raises(ClientError) as ce:
client.delete_delivery_channel(DeliveryChannelName='testchannel')
assert ce.exception.response['Error']['Code'] == 'NoSuchDeliveryChannelException'
@mock_config
@mock_s3
def test_list_discovered_resource():
    """NOTE: We are only really testing the Config part. For each individual service, please add tests
    for that individual service's "list_config_service_resources" function.
    """
    client = boto3.client('config', region_name='us-west-2')

    # With nothing created yet:
    assert not client.list_discovered_resources(resourceType='AWS::S3::Bucket')['resourceIdentifiers']

    # Create some S3 buckets:
    s3_client = boto3.client('s3', region_name='us-west-2')
    for x in range(0, 10):
        s3_client.create_bucket(Bucket='bucket{}'.format(x), CreateBucketConfiguration={'LocationConstraint': 'us-west-2'})

    # Now try:
    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket')
    assert len(result['resourceIdentifiers']) == 10
    for x in range(0, 10):
        assert result['resourceIdentifiers'][x] == {
            'resourceType': 'AWS::S3::Bucket',
            'resourceId': 'bucket{}'.format(x),
            'resourceName': 'bucket{}'.format(x)
        }
    assert not result.get('nextToken')

    # Test that pagination places a proper nextToken in the response and also that the limit works:
    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, nextToken='bucket1')
    assert len(result['resourceIdentifiers']) == 1
    assert result['nextToken'] == 'bucket2'

    # Try with a resource name:
    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, resourceName='bucket1')
    assert len(result['resourceIdentifiers']) == 1
    assert not result.get('nextToken')

    # Try with a resource ID:
    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, resourceIds=['bucket1'])
    assert len(result['resourceIdentifiers']) == 1
    assert not result.get('nextToken')

    # Try with duplicated resource IDs:
    result = client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=1, resourceIds=['bucket1', 'bucket1'])
    assert len(result['resourceIdentifiers']) == 1
    assert not result.get('nextToken')

    # Test with an invalid resource type:
    assert not client.list_discovered_resources(resourceType='LOL::NOT::A::RESOURCE::TYPE')['resourceIdentifiers']

    # Test with an invalid limit > 100:
    with assert_raises(ClientError) as ce:
        client.list_discovered_resources(resourceType='AWS::S3::Bucket', limit=101)
    assert '101' in ce.exception.response['Error']['Message']

    # Test by supplying both resourceName and also resourceIds:
    with assert_raises(ClientError) as ce:
        client.list_discovered_resources(resourceType='AWS::S3::Bucket', resourceName='whats', resourceIds=['up', 'doc'])
    assert 'Both Resource ID and Resource Name cannot be specified in the request' in ce.exception.response['Error']['Message']

    # More than 20 resourceIds:
    resource_ids = ['{}'.format(x) for x in range(0, 21)]
    with assert_raises(ClientError) as ce:
        client.list_discovered_resources(resourceType='AWS::S3::Bucket', resourceIds=resource_ids)
    assert 'The specified list had more than 20 resource ID\'s.' in ce.exception.response['Error']['Message']
@mock_config
@mock_s3
def test_list_aggregate_discovered_resource():
    """NOTE: We are only really testing the Config part. For each individual service, please add tests
    for that individual service's "list_config_service_resources" function.
    """
    client = boto3.client('config', region_name='us-west-2')

    # Without an aggregator:
    with assert_raises(ClientError) as ce:
        client.list_aggregate_discovered_resources(ConfigurationAggregatorName='lolno', ResourceType='AWS::S3::Bucket')
    assert 'The configuration aggregator does not exist' in ce.exception.response['Error']['Message']

    # Create the aggregator:
    account_aggregation_source = {
        'AccountIds': [
            '012345678910',
            '111111111111',
            '222222222222'
        ],
        'AllAwsRegions': True
    }

    client.put_configuration_aggregator(
        ConfigurationAggregatorName='testing',
        AccountAggregationSources=[account_aggregation_source]
    )

    # With nothing created yet:
    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing',
                                                          ResourceType='AWS::S3::Bucket')['ResourceIdentifiers']

    # Create some S3 buckets:
    s3_client = boto3.client('s3', region_name='us-west-2')
    for x in range(0, 10):
        s3_client.create_bucket(Bucket='bucket{}'.format(x), CreateBucketConfiguration={'LocationConstraint': 'us-west-2'})

    s3_client_eu = boto3.client('s3', region_name='eu-west-1')
    for x in range(10, 12):
        s3_client_eu.create_bucket(Bucket='eu-bucket{}'.format(x), CreateBucketConfiguration={'LocationConstraint': 'eu-west-1'})

    # Now try:
    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket')
    assert len(result['ResourceIdentifiers']) == 12
    for x in range(0, 10):
        assert result['ResourceIdentifiers'][x] == {
            'SourceAccountId': '123456789012',
            'ResourceType': 'AWS::S3::Bucket',
            'ResourceId': 'bucket{}'.format(x),
            'ResourceName': 'bucket{}'.format(x),
            'SourceRegion': 'us-west-2'
        }
    for x in range(10, 12):
        assert result['ResourceIdentifiers'][x] == {
            'SourceAccountId': '123456789012',
            'ResourceType': 'AWS::S3::Bucket',
            'ResourceId': 'eu-bucket{}'.format(x),
            'ResourceName': 'eu-bucket{}'.format(x),
            'SourceRegion': 'eu-west-1'
        }
    assert not result.get('NextToken')

    # Test that pagination places a proper NextToken in the response and also that the limit works:
    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
                                                        Limit=1, NextToken='bucket1')
    assert len(result['ResourceIdentifiers']) == 1
    assert result['NextToken'] == 'bucket2'

    # Try with a resource name:
    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
                                                        Limit=1, NextToken='bucket1', Filters={'ResourceName': 'bucket1'})
    assert len(result['ResourceIdentifiers']) == 1
    assert not result.get('NextToken')

    # Try with a resource ID:
    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
                                                        Limit=1, NextToken='bucket1', Filters={'ResourceId': 'bucket1'})
    assert len(result['ResourceIdentifiers']) == 1
    assert not result.get('NextToken')

    # Try with a region specified:
    result = client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
                                                        Filters={'Region': 'eu-west-1'})
    assert len(result['ResourceIdentifiers']) == 2
    assert result['ResourceIdentifiers'][0]['SourceRegion'] == 'eu-west-1'
    assert not result.get('NextToken')

    # Try with both name and ID set to the incorrect values:
    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
                                                          Filters={'ResourceId': 'bucket1',
                                                                   'ResourceName': 'bucket2'})['ResourceIdentifiers']

    # Test with an invalid resource type:
    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing',
                                                          ResourceType='LOL::NOT::A::RESOURCE::TYPE')['ResourceIdentifiers']

    # Try with a correct name but incorrect region:
    assert not client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket',
                                                          Filters={'ResourceId': 'bucket1',
                                                                   'Region': 'us-west-1'})['ResourceIdentifiers']

    # Test with an invalid limit > 100:
    with assert_raises(ClientError) as ce:
        client.list_aggregate_discovered_resources(ConfigurationAggregatorName='testing', ResourceType='AWS::S3::Bucket', Limit=101)
    assert '101' in ce.exception.response['Error']['Message']

tests/test_s3/test_s3.py

@ -32,6 +32,7 @@ import sure # noqa
from moto import settings, mock_s3, mock_s3_deprecated
import moto.s3.models as s3model
from moto.core.exceptions import InvalidNextTokenException
if settings.TEST_SERVER_MODE:
REDUCED_PART_SIZE = s3model.UPLOAD_PART_MIN_SIZE
@ -273,6 +274,7 @@ def test_multipart_invalid_order():
bucket.complete_multipart_upload.when.called_with(
multipart.key_name, multipart.id, xml).should.throw(S3ResponseError)
@mock_s3_deprecated
@reduced_min_part_size
def test_multipart_etag_quotes_stripped():
@ -297,6 +299,7 @@ def test_multipart_etag_quotes_stripped():
# we should get both parts as the key contents
bucket.get_key("the-key").etag.should.equal(EXPECTED_ETAG)
@mock_s3_deprecated
@reduced_min_part_size
def test_multipart_duplicate_upload():
@ -666,6 +669,7 @@ def test_delete_keys_invalid():
result.deleted.should.have.length_of(0)
result.errors.should.have.length_of(0)
@mock_s3
def test_boto3_delete_empty_keys_list():
with assert_raises(ClientError) as err:
@ -1640,6 +1644,7 @@ def test_boto3_delete_versioned_bucket():
client.delete_bucket(Bucket='blah')
@mock_s3
def test_boto3_get_object_if_modified_since():
s3 = boto3.client('s3', region_name='us-east-1')
@ -1663,6 +1668,7 @@ def test_boto3_get_object_if_modified_since():
e = err.exception
e.response['Error'].should.equal({'Code': '304', 'Message': 'Not Modified'})
@mock_s3
def test_boto3_head_object_if_modified_since():
s3 = boto3.client('s3', region_name='us-east-1')
@ -1830,6 +1836,7 @@ def test_boto3_put_bucket_tagging():
e.response["Error"]["Code"].should.equal("InvalidTag")
e.response["Error"]["Message"].should.equal("Cannot provide multiple Tags with the same key")
@mock_s3
def test_boto3_get_bucket_tagging():
s3 = boto3.client("s3", region_name="us-east-1")
@ -2730,6 +2737,7 @@ def test_boto3_list_object_versions_with_versioning_enabled_late():
response = s3.get_object(Bucket=bucket_name, Key=key)
response['Body'].read().should.equal(items[-1])
@mock_s3
def test_boto3_bad_prefix_list_object_versions():
s3 = boto3.client('s3', region_name='us-east-1')
@ -2932,6 +2940,7 @@ TEST_XML = """\
</ns0:WebsiteConfiguration>
"""
@mock_s3
def test_boto3_bucket_name_too_long():
s3 = boto3.client('s3', region_name='us-east-1')
@ -2939,6 +2948,7 @@ def test_boto3_bucket_name_too_long():
s3.create_bucket(Bucket='x'*64)
exc.exception.response['Error']['Code'].should.equal('InvalidBucketName')
@mock_s3
def test_boto3_bucket_name_too_short():
s3 = boto3.client('s3', region_name='us-east-1')
@ -2946,6 +2956,7 @@ def test_boto3_bucket_name_too_short():
s3.create_bucket(Bucket='x'*2)
exc.exception.response['Error']['Code'].should.equal('InvalidBucketName')
@mock_s3
def test_accelerated_none_when_unspecified():
bucket_name = 'some_bucket'
@ -2954,6 +2965,7 @@ def test_accelerated_none_when_unspecified():
resp = s3.get_bucket_accelerate_configuration(Bucket=bucket_name)
resp.shouldnt.have.key('Status')
@mock_s3
def test_can_enable_bucket_acceleration():
bucket_name = 'some_bucket'
@ -2968,6 +2980,7 @@ def test_can_enable_bucket_acceleration():
resp.should.have.key('Status')
resp['Status'].should.equal('Enabled')
@mock_s3
def test_can_suspend_bucket_acceleration():
bucket_name = 'some_bucket'
@ -2986,6 +2999,7 @@ def test_can_suspend_bucket_acceleration():
resp.should.have.key('Status')
resp['Status'].should.equal('Suspended')
@mock_s3
def test_suspending_acceleration_on_not_configured_bucket_does_nothing():
bucket_name = 'some_bucket'
@ -2999,6 +3013,7 @@ def test_suspending_acceleration_on_not_configured_bucket_does_nothing():
resp = s3.get_bucket_accelerate_configuration(Bucket=bucket_name)
resp.shouldnt.have.key('Status')
@mock_s3
def test_accelerate_configuration_status_validation():
bucket_name = 'some_bucket'
@ -3011,6 +3026,7 @@ def test_accelerate_configuration_status_validation():
)
exc.exception.response['Error']['Code'].should.equal('MalformedXML')
@mock_s3
def test_accelerate_configuration_is_not_supported_when_bucket_name_has_dots():
bucket_name = 'some.bucket.with.dots'
@ -3023,6 +3039,7 @@ def test_accelerate_configuration_is_not_supported_when_bucket_name_has_dots():
)
exc.exception.response['Error']['Code'].should.equal('InvalidRequest')
def store_and_read_back_a_key(key):
s3 = boto3.client('s3', region_name='us-east-1')
bucket_name = 'mybucket'
@ -3038,10 +3055,12 @@ def store_and_read_back_a_key(key):
response = s3.get_object(Bucket=bucket_name, Key=key)
response['Body'].read().should.equal(body)
@mock_s3
def test_paths_with_leading_slashes_work():
store_and_read_back_a_key('/a-key')
@mock_s3
def test_root_dir_with_empty_name_works():
if os.environ.get('TEST_SERVER_MODE', 'false').lower() == 'true':
@ -3083,3 +3102,70 @@ def test_delete_objects_with_url_encoded_key(key):
s3.delete_objects(Bucket=bucket_name, Delete={'Objects': [{'Key': key}]})
assert_deleted()
@mock_s3
def test_list_config_discovered_resources():
    from moto.s3.config import s3_config_query

    # Without any buckets:
    assert s3_config_query.list_config_service_resources(None, None, 100, None) == ([], None)

    # With 10 buckets in us-west-2:
    for x in range(0, 10):
        s3_config_query.backends['global'].create_bucket('bucket{}'.format(x), 'us-west-2')

    # With 2 buckets in eu-west-1:
    for x in range(10, 12):
        s3_config_query.backends['global'].create_bucket('eu-bucket{}'.format(x), 'eu-west-1')

    result, next_token = s3_config_query.list_config_service_resources(None, None, 100, None)
    assert not next_token
    assert len(result) == 12
    for x in range(0, 10):
        assert result[x] == {
            'type': 'AWS::S3::Bucket',
            'id': 'bucket{}'.format(x),
            'name': 'bucket{}'.format(x),
            'region': 'us-west-2'
        }
    for x in range(10, 12):
        assert result[x] == {
            'type': 'AWS::S3::Bucket',
            'id': 'eu-bucket{}'.format(x),
            'name': 'eu-bucket{}'.format(x),
            'region': 'eu-west-1'
        }

    # With a name:
    result, next_token = s3_config_query.list_config_service_resources(None, 'bucket0', 100, None)
    assert len(result) == 1 and result[0]['name'] == 'bucket0' and not next_token

    # With a region:
    result, next_token = s3_config_query.list_config_service_resources(None, None, 100, None, resource_region='eu-west-1')
    assert len(result) == 2 and not next_token and result[1]['name'] == 'eu-bucket11'

    # With resource ids:
    result, next_token = s3_config_query.list_config_service_resources(['bucket0', 'bucket1'], None, 100, None)
    assert len(result) == 2 and result[0]['name'] == 'bucket0' and result[1]['name'] == 'bucket1' and not next_token

    # With duplicated resource ids:
    result, next_token = s3_config_query.list_config_service_resources(['bucket0', 'bucket0'], None, 100, None)
    assert len(result) == 1 and result[0]['name'] == 'bucket0' and not next_token

    # Pagination:
    result, next_token = s3_config_query.list_config_service_resources(None, None, 1, None)
    assert len(result) == 1 and result[0]['name'] == 'bucket0' and next_token == 'bucket1'

    # Last Page:
    result, next_token = s3_config_query.list_config_service_resources(None, None, 1, 'eu-bucket11', resource_region='eu-west-1')
    assert len(result) == 1 and result[0]['name'] == 'eu-bucket11' and not next_token

    # With a list of buckets:
    result, next_token = s3_config_query.list_config_service_resources(['bucket0', 'bucket1'], None, 1, None)
    assert len(result) == 1 and result[0]['name'] == 'bucket0' and next_token == 'bucket1'

    # With an invalid page:
    with assert_raises(InvalidNextTokenException) as inte:
        s3_config_query.list_config_service_resources(None, None, 1, 'notabucket')

    assert 'The nextToken provided is invalid' in inte.exception.message