Merge branch 'master' into support-iterator-type-at-after-sequence
This commit is contained in:
commit a068a56972
1 .gitignore vendored
@@ -15,6 +15,7 @@ python_env
 .ropeproject/
 .pytest_cache/
 venv/
 env/
 .python-version
 .vscode/
+tests/file.tmp
File diff suppressed because it is too large
91 README.md
@@ -78,6 +78,7 @@ It gets even better! Moto isn't just for Python code and it isn't just for S3. L
 | Cognito Identity Provider | @mock_cognitoidp   | basic endpoints done |
 |-------------------------------------------------------------------------------------|
 | Config                    | @mock_config       | basic endpoints done |
+|                           |                    | core endpoints done  |
 |-------------------------------------------------------------------------------------|
 | Data Pipeline             | @mock_datapipeline | basic endpoints done |
 |-------------------------------------------------------------------------------------|
@@ -296,6 +297,96 @@ def test_describe_instances_allowed():

See [the related test suite](https://github.com/spulec/moto/blob/master/tests/test_core/test_auth.py) for more examples.

## Very Important -- Recommended Usage

There are some important caveats to be aware of when using moto:

*Failure to follow these guidelines could result in your tests mutating your __REAL__ infrastructure!*

### How do I prevent tests from mutating my real infrastructure?

You need to ensure that the mocks are actually in place. Changes made to recent versions of `botocore`
have altered some of the mock behavior. In short, you need to ensure that you _always_ do the following:
1. Ensure that your tests have dummy environment variables set up:

       export AWS_ACCESS_KEY_ID='testing'
       export AWS_SECRET_ACCESS_KEY='testing'
       export AWS_SECURITY_TOKEN='testing'
       export AWS_SESSION_TOKEN='testing'

1. __VERY IMPORTANT__: ensure that you have your mocks set up __BEFORE__ your `boto3` client is established.
   This can typically happen if you import a module that has a `boto3` client instantiated outside of a function.
   See the pesky imports section below on how to work around this.
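The ordering requirement in the second step can be illustrated with a small stdlib-only sketch (the `Client` class below is a toy stand-in for a `boto3` client, not moto's or boto3's API): a client that reads its credentials at construction time never sees a patch applied afterwards.

```python
import os
from unittest import mock


class Client:
    """Toy stand-in for a boto3 client: credentials are captured once, at construction."""
    def __init__(self):
        self.key = os.environ.get("AWS_ACCESS_KEY_ID", "<real credentials>")


# Wrong order: the client is built before the patch is in place,
# so it holds on to whatever the real environment contained.
early_client = Client()

with mock.patch.dict(os.environ, {"AWS_ACCESS_KEY_ID": "testing"}):
    # Right order: the client is built while the patch is active.
    late_client = Client()

print(late_client.key)  # 'testing' -- the patched value was seen
```

The same reasoning applies to moto's decorators and context managers: they must wrap the point where the client object is created, not merely the point where it is used.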

### Example on usage?

If you are a user of [pytest](https://pytest.org/en/latest/), you can leverage [pytest fixtures](https://pytest.org/en/latest/fixture.html#fixture)
to help set up your mocks and other AWS resources that you would need.

Here is an example:
```python
@pytest.fixture(scope='function')
def aws_credentials():
    """Mocked AWS Credentials for moto."""
    os.environ['AWS_ACCESS_KEY_ID'] = 'testing'
    os.environ['AWS_SECRET_ACCESS_KEY'] = 'testing'
    os.environ['AWS_SECURITY_TOKEN'] = 'testing'
    os.environ['AWS_SESSION_TOKEN'] = 'testing'


@pytest.fixture(scope='function')
def s3(aws_credentials):
    with mock_s3():
        yield boto3.client('s3', region_name='us-east-1')


@pytest.fixture(scope='function')
def sts(aws_credentials):
    with mock_sts():
        yield boto3.client('sts', region_name='us-east-1')


@pytest.fixture(scope='function')
def cloudwatch(aws_credentials):
    with mock_cloudwatch():
        yield boto3.client('cloudwatch', region_name='us-east-1')

... etc.
```

In the code sample above, all of the AWS/mocked fixtures take in a parameter of `aws_credentials`,
which sets the proper fake environment variables. The fake environment variables are used so that `botocore` doesn't try to locate real
credentials on your system.

Next, once you need to do anything with the mocked AWS environment, do something like:
```python
def test_create_bucket(s3):
    # s3 is a fixture defined above that yields a boto3 s3 client.
    # Feel free to instantiate another boto3 S3 client -- Keep note of the region though.
    s3.create_bucket(Bucket="somebucket")

    result = s3.list_buckets()
    assert len(result['Buckets']) == 1
    assert result['Buckets'][0]['Name'] == 'somebucket'
```

### What about those pesky imports?

Recall earlier, it was mentioned that mocks should be established __BEFORE__ the clients are set up. One way
to avoid import issues is to make use of local Python imports -- i.e. import the module inside of the unit
test you want to run vs. importing at the top of the file.

Example:
```python
def test_something(s3):
    from some.package.that.does.something.with.s3 import some_func  # <-- Local import for unit test
    # ^^ Importing here ensures that the mock has been established.

    some_func()  # The mock has been established from the "s3" pytest fixture, so this function that uses
                 # a package-level S3 client will properly use the mock and not reach out to AWS.
```

### Other caveats

For Tox, Travis CI, and other build systems, you might need to also perform a `touch ~/.aws/credentials`
command before running the tests. As long as that file is present (empty preferably) and the environment
variables above are set, you should be good to go.
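That `touch` step can also be done from test setup code so CI images without a pre-made home directory still pass. The helper below is a hypothetical sketch (not part of moto) using only the standard library; `botocore` only needs the file to exist, since the fake environment variables above take precedence. It is demonstrated against a temporary directory instead of the real `~`:

```python
import tempfile
from pathlib import Path


def ensure_dummy_credentials(home=None):
    """Create an empty <home>/.aws/credentials if it is missing.

    Equivalent of `touch ~/.aws/credentials`; the file's contents are irrelevant.
    """
    aws_dir = (Path(home) if home else Path.home()) / ".aws"
    aws_dir.mkdir(parents=True, exist_ok=True)
    creds = aws_dir / "credentials"
    creds.touch(exist_ok=True)
    return creds


# Demonstrate against a throwaway directory rather than the real home:
with tempfile.TemporaryDirectory() as tmp_home:
    path = ensure_dummy_credentials(home=tmp_home)
    print(path.exists())  # True
```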
## Stand-alone Server Mode

Moto also has a stand-alone server mode. This allows you to utilize
@@ -105,7 +105,7 @@ class CertBundle(BaseModel):
         self.arn = arn

     @classmethod
-    def generate_cert(cls, domain_name, sans=None):
+    def generate_cert(cls, domain_name, region, sans=None):
         if sans is None:
             sans = set()
         else:
@@ -152,7 +152,7 @@ class CertBundle(BaseModel):
             encryption_algorithm=serialization.NoEncryption()
         )

-        return cls(cert_armored, private_key, cert_type='AMAZON_ISSUED', cert_status='PENDING_VALIDATION')
+        return cls(cert_armored, private_key, cert_type='AMAZON_ISSUED', cert_status='PENDING_VALIDATION', region=region)

     def validate_pk(self):
         try:
@@ -325,7 +325,7 @@ class AWSCertificateManagerBackend(BaseBackend):

         return bundle.arn

-    def get_certificates_list(self):
+    def get_certificates_list(self, statuses):
         """
         Get list of certificates

@@ -333,7 +333,9 @@ class AWSCertificateManagerBackend(BaseBackend):
         :rtype: list of CertBundle
         """
         for arn in self._certificates.keys():
-            yield self.get_certificate(arn)
+            cert = self.get_certificate(arn)
+            if not statuses or cert.status in statuses:
+                yield cert

     def get_certificate(self, arn):
         if arn not in self._certificates:
@@ -355,7 +357,7 @@ class AWSCertificateManagerBackend(BaseBackend):
         if arn is not None:
             return arn

-        cert = CertBundle.generate_cert(domain_name, subject_alt_names)
+        cert = CertBundle.generate_cert(domain_name, region=self.region, sans=subject_alt_names)
         if idempotency_token is not None:
             self._set_idempotency_token_arn(idempotency_token, cert.arn)
         self._certificates[cert.arn] = cert
@@ -132,8 +132,8 @@ class AWSCertificateManagerResponse(BaseResponse):

     def list_certificates(self):
         certs = []
-
-        for cert_bundle in self.acm_backend.get_certificates_list():
+        statuses = self._get_param('CertificateStatuses')
+        for cert_bundle in self.acm_backend.get_certificates_list(statuses):
             certs.append({
                 'CertificateArn': cert_bundle.arn,
                 'DomainName': cert_bundle.common_name
@@ -309,6 +309,25 @@ class ApiKey(BaseModel, dict):
         self['createdDate'] = self['lastUpdatedDate'] = int(time.time())
         self['stageKeys'] = stageKeys

+    def update_operations(self, patch_operations):
+        for op in patch_operations:
+            if op['op'] == 'replace':
+                if '/name' in op['path']:
+                    self['name'] = op['value']
+                elif '/customerId' in op['path']:
+                    self['customerId'] = op['value']
+                elif '/description' in op['path']:
+                    self['description'] = op['value']
+                elif '/enabled' in op['path']:
+                    self['enabled'] = self._str2bool(op['value'])
+            else:
+                raise Exception(
+                    'Patch operation "%s" not implemented' % op['op'])
+        return self
+
+    def _str2bool(self, v):
+        return v.lower() == "true"
+

 class UsagePlan(BaseModel, dict):

@@ -599,6 +618,10 @@ class APIGatewayBackend(BaseBackend):
     def get_apikey(self, api_key_id):
         return self.keys[api_key_id]

+    def update_apikey(self, api_key_id, patch_operations):
+        key = self.keys[api_key_id]
+        return key.update_operations(patch_operations)
+
     def delete_apikey(self, api_key_id):
         self.keys.pop(api_key_id)
         return {}
@@ -245,6 +245,9 @@ class APIGatewayResponse(BaseResponse):

         if self.method == 'GET':
             apikey_response = self.backend.get_apikey(apikey)
+        elif self.method == 'PATCH':
+            patch_operations = self._get_param('patchOperations')
+            apikey_response = self.backend.update_apikey(apikey, patch_operations)
         elif self.method == 'DELETE':
             apikey_response = self.backend.delete_apikey(apikey)
         return 200, {}, json.dumps(apikey_response)
@@ -1,6 +1,7 @@
 from __future__ import unicode_literals

 import base64
+import time
 from collections import defaultdict
 import copy
 import datetime
@@ -31,6 +32,7 @@ from moto.logs.models import logs_backends
 from moto.s3.exceptions import MissingBucket, MissingKey
 from moto import settings
 from .utils import make_function_arn, make_function_ver_arn
+from moto.sqs import sqs_backends

 logger = logging.getLogger(__name__)

@@ -429,24 +431,59 @@ class LambdaFunction(BaseModel):
 class EventSourceMapping(BaseModel):
     def __init__(self, spec):
         # required
         self.function_name = spec['FunctionName']
         self.function_arn = spec['FunctionArn']
         self.event_source_arn = spec['EventSourceArn']
+        self.starting_position = spec['StartingPosition']
         self.uuid = str(uuid.uuid4())
         self.last_modified = time.mktime(datetime.datetime.utcnow().timetuple())

+        # BatchSize service default/max mapping
+        batch_size_map = {
+            'kinesis': (100, 10000),
+            'dynamodb': (100, 1000),
+            'sqs': (10, 10),
+        }
+        source_type = self.event_source_arn.split(":")[2].lower()
+        batch_size_entry = batch_size_map.get(source_type)
+        if batch_size_entry:
+            # Use service default if not provided
+            batch_size = int(spec.get('BatchSize', batch_size_entry[0]))
+            if batch_size > batch_size_entry[1]:
+                raise ValueError("InvalidParameterValueException",
+                                 "BatchSize {} exceeds the max of {}".format(batch_size, batch_size_entry[1]))
+            else:
+                self.batch_size = batch_size
+        else:
+            raise ValueError("InvalidParameterValueException",
+                             "Unsupported event source type")

         # optional
-        self.batch_size = spec.get('BatchSize', 100)
-        self.starting_position = spec.get('StartingPosition', 'TRIM_HORIZON')
         self.enabled = spec.get('Enabled', True)
         self.starting_position_timestamp = spec.get('StartingPositionTimestamp',
                                                     None)

     def get_configuration(self):
         return {
             'UUID': self.uuid,
             'BatchSize': self.batch_size,
             'EventSourceArn': self.event_source_arn,
             'FunctionArn': self.function_arn,
             'LastModified': self.last_modified,
             'LastProcessingResult': '',
             'State': 'Enabled' if self.enabled else 'Disabled',
             'StateTransitionReason': 'User initiated'
         }

     @classmethod
     def create_from_cloudformation_json(cls, resource_name, cloudformation_json,
                                         region_name):
         properties = cloudformation_json['Properties']
         func = lambda_backends[region_name].get_function(properties['FunctionName'])
         spec = {
             'FunctionName': properties['FunctionName'],
             'FunctionArn': func.function_arn,
             'EventSourceArn': properties['EventSourceArn'],
-            'StartingPosition': properties['StartingPosition']
+            'StartingPosition': properties['StartingPosition'],
+            'BatchSize': properties.get('BatchSize', 100)
         }
         optional_properties = 'BatchSize Enabled StartingPositionTimestamp'.split()
         for prop in optional_properties:
@@ -466,8 +503,10 @@ class LambdaVersion(BaseModel):
     def create_from_cloudformation_json(cls, resource_name, cloudformation_json,
                                         region_name):
         properties = cloudformation_json['Properties']
+        function_name = properties['FunctionName']
+        func = lambda_backends[region_name].publish_function(function_name)
         spec = {
-            'Version': properties.get('Version')
+            'Version': func.version
         }
         return LambdaVersion(spec)

@@ -515,6 +554,9 @@ class LambdaStorage(object):
     def get_arn(self, arn):
         return self._arns.get(arn, None)

+    def get_function_by_name_or_arn(self, input):
+        return self.get_function(input) or self.get_arn(input)
+
     def put_function(self, fn):
         """
         :param fn: Function
@@ -596,6 +638,7 @@ class LambdaStorage(object):
 class LambdaBackend(BaseBackend):
     def __init__(self, region_name):
         self._lambdas = LambdaStorage()
+        self._event_source_mappings = {}
         self.region_name = region_name

     def reset(self):
@@ -617,6 +660,40 @@ class LambdaBackend(BaseBackend):
         fn.version = ver.version
         return fn

+    def create_event_source_mapping(self, spec):
+        required = [
+            'EventSourceArn',
+            'FunctionName',
+        ]
+        for param in required:
+            if not spec.get(param):
+                raise RESTError('InvalidParameterValueException', 'Missing {}'.format(param))
+
+        # Validate function name
+        func = self._lambdas.get_function_by_name_or_arn(spec.pop('FunctionName', ''))
+        if not func:
+            raise RESTError('ResourceNotFoundException', 'Invalid FunctionName')
+
+        # Validate queue
+        for queue in sqs_backends[self.region_name].queues.values():
+            if queue.queue_arn == spec['EventSourceArn']:
+                if queue.lambda_event_source_mappings.get('func.function_arn'):
+                    # TODO: Correct exception?
+                    raise RESTError('ResourceConflictException', 'The resource already exists.')
+                if queue.fifo_queue:
+                    raise RESTError('InvalidParameterValueException',
+                                    '{} is FIFO'.format(queue.queue_arn))
+                else:
+                    spec.update({'FunctionArn': func.function_arn})
+                    esm = EventSourceMapping(spec)
+                    self._event_source_mappings[esm.uuid] = esm
+
+                    # Set backend function on queue
+                    queue.lambda_event_source_mappings[esm.function_arn] = esm
+
+                    return esm
+        raise RESTError('ResourceNotFoundException', 'Invalid EventSourceArn')
+
     def publish_function(self, function_name):
         return self._lambdas.publish_function(function_name)

@@ -626,6 +703,33 @@ class LambdaBackend(BaseBackend):
     def list_versions_by_function(self, function_name):
         return self._lambdas.list_versions_by_function(function_name)

+    def get_event_source_mapping(self, uuid):
+        return self._event_source_mappings.get(uuid)
+
+    def delete_event_source_mapping(self, uuid):
+        return self._event_source_mappings.pop(uuid)
+
+    def update_event_source_mapping(self, uuid, spec):
+        esm = self.get_event_source_mapping(uuid)
+        if esm:
+            if spec.get('FunctionName'):
+                func = self._lambdas.get_function_by_name_or_arn(spec.get('FunctionName'))
+                esm.function_arn = func.function_arn
+            if 'BatchSize' in spec:
+                esm.batch_size = spec['BatchSize']
+            if 'Enabled' in spec:
+                esm.enabled = spec['Enabled']
+            return esm
+        return False
+
+    def list_event_source_mappings(self, event_source_arn, function_name):
+        esms = list(self._event_source_mappings.values())
+        if event_source_arn:
+            esms = list(filter(lambda x: x.event_source_arn == event_source_arn, esms))
+        if function_name:
+            esms = list(filter(lambda x: x.function_name == function_name, esms))
+        return esms
+
     def get_function_by_arn(self, function_arn):
         return self._lambdas.get_arn(function_arn)

@@ -635,7 +739,43 @@ class LambdaBackend(BaseBackend):
     def list_functions(self):
         return self._lambdas.all()

-    def send_message(self, function_name, message, subject=None, qualifier=None):
+    def send_sqs_batch(self, function_arn, messages, queue_arn):
+        success = True
+        for message in messages:
+            func = self.get_function_by_arn(function_arn)
+            result = self._send_sqs_message(func, message, queue_arn)
+            if not result:
+                success = False
+        return success
+
+    def _send_sqs_message(self, func, message, queue_arn):
+        event = {
+            "Records": [
+                {
+                    "messageId": message.id,
+                    "receiptHandle": message.receipt_handle,
+                    "body": message.body,
+                    "attributes": {
+                        "ApproximateReceiveCount": "1",
+                        "SentTimestamp": "1545082649183",
+                        "SenderId": "AIDAIENQZJOLO23YVJ4VO",
+                        "ApproximateFirstReceiveTimestamp": "1545082649185"
+                    },
+                    "messageAttributes": {},
+                    "md5OfBody": "098f6bcd4621d373cade4e832627b4f6",
+                    "eventSource": "aws:sqs",
+                    "eventSourceARN": queue_arn,
+                    "awsRegion": self.region_name
+                }
+            ]
+        }
+
+        request_headers = {}
+        response_headers = {}
+        func.invoke(json.dumps(event), request_headers, response_headers)
+        return 'x-amz-function-error' not in response_headers
+
+    def send_sns_message(self, function_name, message, subject=None, qualifier=None):
         event = {
             "Records": [
                 {
@@ -39,6 +39,31 @@ class LambdaResponse(BaseResponse):
         else:
             raise ValueError("Cannot handle request")

+    def event_source_mappings(self, request, full_url, headers):
+        self.setup_class(request, full_url, headers)
+        if request.method == 'GET':
+            querystring = self.querystring
+            event_source_arn = querystring.get('EventSourceArn', [None])[0]
+            function_name = querystring.get('FunctionName', [None])[0]
+            return self._list_event_source_mappings(event_source_arn, function_name)
+        elif request.method == 'POST':
+            return self._create_event_source_mapping(request, full_url, headers)
+        else:
+            raise ValueError("Cannot handle request")
+
+    def event_source_mapping(self, request, full_url, headers):
+        self.setup_class(request, full_url, headers)
+        path = request.path if hasattr(request, 'path') else path_url(request.url)
+        uuid = path.split('/')[-1]
+        if request.method == 'GET':
+            return self._get_event_source_mapping(uuid)
+        elif request.method == 'PUT':
+            return self._update_event_source_mapping(uuid)
+        elif request.method == 'DELETE':
+            return self._delete_event_source_mapping(uuid)
+        else:
+            raise ValueError("Cannot handle request")
+
     def function(self, request, full_url, headers):
         self.setup_class(request, full_url, headers)
         if request.method == 'GET':
@@ -177,6 +202,45 @@ class LambdaResponse(BaseResponse):
             config = fn.get_configuration()
             return 201, {}, json.dumps(config)

+    def _create_event_source_mapping(self, request, full_url, headers):
+        try:
+            fn = self.lambda_backend.create_event_source_mapping(self.json_body)
+        except ValueError as e:
+            return 400, {}, json.dumps({"Error": {"Code": e.args[0], "Message": e.args[1]}})
+        else:
+            config = fn.get_configuration()
+            return 201, {}, json.dumps(config)
+
+    def _list_event_source_mappings(self, event_source_arn, function_name):
+        esms = self.lambda_backend.list_event_source_mappings(event_source_arn, function_name)
+        result = {
+            'EventSourceMappings': [esm.get_configuration() for esm in esms]
+        }
+        return 200, {}, json.dumps(result)
+
+    def _get_event_source_mapping(self, uuid):
+        result = self.lambda_backend.get_event_source_mapping(uuid)
+        if result:
+            return 200, {}, json.dumps(result.get_configuration())
+        else:
+            return 404, {}, "{}"
+
+    def _update_event_source_mapping(self, uuid):
+        result = self.lambda_backend.update_event_source_mapping(uuid, self.json_body)
+        if result:
+            return 202, {}, json.dumps(result.get_configuration())
+        else:
+            return 404, {}, "{}"
+
+    def _delete_event_source_mapping(self, uuid):
+        esm = self.lambda_backend.delete_event_source_mapping(uuid)
+        if esm:
+            json_result = esm.get_configuration()
+            json_result.update({'State': 'Deleting'})
+            return 202, {}, json.dumps(json_result)
+        else:
+            return 404, {}, "{}"
+
     def _publish_function(self, request, full_url, headers):
         function_name = self.path.rsplit('/', 2)[-2]

@@ -11,6 +11,8 @@ url_paths = {
     '{0}/(?P<api_version>[^/]+)/functions/?$': response.root,
     r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/?$': response.function,
     r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/versions/?$': response.versions,
+    r'{0}/(?P<api_version>[^/]+)/event-source-mappings/?$': response.event_source_mappings,
+    r'{0}/(?P<api_version>[^/]+)/event-source-mappings/(?P<UUID>[\w_-]+)/?$': response.event_source_mapping,
     r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/invocations/?$': response.invoke,
     r'{0}/(?P<api_version>[^/]+)/functions/(?P<function_name>[\w_-]+)/invoke-async/?$': response.invoke_async,
     r'{0}/(?P<api_version>[^/]+)/tags/(?P<resource_arn>.+)': response.tag,
@@ -514,10 +514,13 @@ class BatchBackend(BaseBackend):
         return self._job_definitions.get(arn)

     def get_job_definition_by_name(self, name):
-        for comp_env in self._job_definitions.values():
-            if comp_env.name == name:
-                return comp_env
-        return None
+        latest_revision = -1
+        latest_job = None
+        for job_def in self._job_definitions.values():
+            if job_def.name == name and job_def.revision > latest_revision:
+                latest_job = job_def
+                latest_revision = job_def.revision
+        return latest_job

     def get_job_definition_by_name_revision(self, name, revision):
         for job_def in self._job_definitions.values():
@@ -534,10 +537,13 @@ class BatchBackend(BaseBackend):
         :return: Job definition or None
         :rtype: JobDefinition or None
         """
-        env = self.get_job_definition_by_arn(identifier)
-        if env is None:
-            env = self.get_job_definition_by_name(identifier)
-        return env
+        job_def = self.get_job_definition_by_arn(identifier)
+        if job_def is None:
+            if ':' in identifier:
+                job_def = self.get_job_definition_by_name_revision(*identifier.split(':', 1))
+            else:
+                job_def = self.get_job_definition_by_name(identifier)
+        return job_def

     def get_job_definitions(self, identifier):
         """
@@ -984,9 +990,7 @@ class BatchBackend(BaseBackend):
         # TODO parameters, retries (which is a dict raw from request), job dependancies and container overrides are ignored for now

         # Look for job definition
-        job_def = self.get_job_definition_by_arn(job_def_id)
-        if job_def is None and ':' in job_def_id:
-            job_def = self.get_job_definition_by_name_revision(*job_def_id.split(':', 1))
+        job_def = self.get_job_definition(job_def_id)
         if job_def is None:
             raise ClientException('Job definition {0} does not exist'.format(job_def_id))

@@ -52,6 +52,18 @@ class InvalidResourceTypeException(JsonRESTError):
         super(InvalidResourceTypeException, self).__init__("ValidationException", message)


+class NoSuchConfigurationAggregatorException(JsonRESTError):
+    code = 400
+
+    def __init__(self, number=1):
+        if number == 1:
+            message = 'The configuration aggregator does not exist. Check the configuration aggregator name and try again.'
+        else:
+            message = 'At least one of the configuration aggregators does not exist. Check the configuration aggregator' \
+                      ' names and try again.'
+        super(NoSuchConfigurationAggregatorException, self).__init__("NoSuchConfigurationAggregatorException", message)
+
+
 class NoSuchConfigurationRecorderException(JsonRESTError):
     code = 400

@@ -78,6 +90,14 @@ class NoSuchBucketException(JsonRESTError):
         super(NoSuchBucketException, self).__init__("NoSuchBucketException", message)


+class InvalidNextTokenException(JsonRESTError):
+    code = 400
+
+    def __init__(self):
+        message = 'The nextToken provided is invalid'
+        super(InvalidNextTokenException, self).__init__("InvalidNextTokenException", message)
+
+
 class InvalidS3KeyPrefixException(JsonRESTError):
     code = 400

@@ -147,3 +167,66 @@ class LastDeliveryChannelDeleteFailedException(JsonRESTError):
         message = 'Failed to delete last specified delivery channel with name \'{name}\', because there, ' \
                   'because there is a running configuration recorder.'.format(name=name)
         super(LastDeliveryChannelDeleteFailedException, self).__init__("LastDeliveryChannelDeleteFailedException", message)
+
+
+class TooManyAccountSources(JsonRESTError):
+    code = 400
+
+    def __init__(self, length):
+        locations = ['com.amazonaws.xyz'] * length
+
+        message = 'Value \'[{locations}]\' at \'accountAggregationSources\' failed to satisfy constraint: ' \
+                  'Member must have length less than or equal to 1'.format(locations=', '.join(locations))
+        super(TooManyAccountSources, self).__init__("ValidationException", message)
+
+
+class DuplicateTags(JsonRESTError):
+    code = 400
+
+    def __init__(self):
+        super(DuplicateTags, self).__init__(
+            'InvalidInput', 'Duplicate tag keys found. Please note that Tag keys are case insensitive.')
+
+
+class TagKeyTooBig(JsonRESTError):
+    code = 400
+
+    def __init__(self, tag, param='tags.X.member.key'):
+        super(TagKeyTooBig, self).__init__(
+            'ValidationException', "1 validation error detected: Value '{}' at '{}' failed to satisfy "
+                                   "constraint: Member must have length less than or equal to 128".format(tag, param))
+
+
+class TagValueTooBig(JsonRESTError):
+    code = 400
+
+    def __init__(self, tag):
+        super(TagValueTooBig, self).__init__(
+            'ValidationException', "1 validation error detected: Value '{}' at 'tags.X.member.value' failed to satisfy "
+                                   "constraint: Member must have length less than or equal to 256".format(tag))
+
+
+class InvalidParameterValueException(JsonRESTError):
+    code = 400
+
+    def __init__(self, message):
+        super(InvalidParameterValueException, self).__init__('InvalidParameterValueException', message)
+
+
+class InvalidTagCharacters(JsonRESTError):
+    code = 400
+
+    def __init__(self, tag, param='tags.X.member.key'):
+        message = "1 validation error detected: Value '{}' at '{}' failed to satisfy ".format(tag, param)
+        message += 'constraint: Member must satisfy regular expression pattern: [\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-@]+'
+
+        super(InvalidTagCharacters, self).__init__('ValidationException', message)
+
+
+class TooManyTags(JsonRESTError):
+    code = 400
+
+    def __init__(self, tags, param='tags'):
+        super(TooManyTags, self).__init__(
+            'ValidationException', "1 validation error detected: Value '{}' at '{}' failed to satisfy "
+                                   "constraint: Member must have length less than or equal to 50.".format(tags, param))
@@ -1,6 +1,9 @@
 import json
+import re
 import time
 import pkg_resources
+import random
+import string

 from datetime import datetime
@@ -12,37 +15,125 @@ from moto.config.exceptions import InvalidResourceTypeException, InvalidDelivery
     NoSuchConfigurationRecorderException, NoAvailableConfigurationRecorderException, \
     InvalidDeliveryChannelNameException, NoSuchBucketException, InvalidS3KeyPrefixException, \
     InvalidSNSTopicARNException, MaxNumberOfDeliveryChannelsExceededException, NoAvailableDeliveryChannelException, \
-    NoSuchDeliveryChannelException, LastDeliveryChannelDeleteFailedException
+    NoSuchDeliveryChannelException, LastDeliveryChannelDeleteFailedException, TagKeyTooBig, \
+    TooManyTags, TagValueTooBig, TooManyAccountSources, InvalidParameterValueException, InvalidNextTokenException, \
+    NoSuchConfigurationAggregatorException, InvalidTagCharacters, DuplicateTags

 from moto.core import BaseBackend, BaseModel

 DEFAULT_ACCOUNT_ID = 123456789012
+POP_STRINGS = [
+    'capitalizeStart',
+    'CapitalizeStart',
+    'capitalizeArn',
+    'CapitalizeArn',
+    'capitalizeARN',
+    'CapitalizeARN'
+]
+DEFAULT_PAGE_SIZE = 100


 def datetime2int(date):
     return int(time.mktime(date.timetuple()))


-def snake_to_camels(original):
+def snake_to_camels(original, cap_start, cap_arn):
     parts = original.split('_')

     camel_cased = parts[0].lower() + ''.join(p.title() for p in parts[1:])
-    camel_cased = camel_cased.replace('Arn', 'ARN')  # Config uses 'ARN' instead of 'Arn'
+
+    if cap_arn:
+        camel_cased = camel_cased.replace('Arn', 'ARN')  # Some config services use 'ARN' instead of 'Arn'
+
+    if cap_start:
+        camel_cased = camel_cased[0].upper() + camel_cased[1::]

     return camel_cased


+def random_string():
+    """Returns a random set of 8 lowercase letters for the Config Aggregator ARN"""
+    chars = []
+    for x in range(0, 8):
+        chars.append(random.choice(string.ascii_lowercase))
+
+    return "".join(chars)
+
+
+def validate_tag_key(tag_key, exception_param='tags.X.member.key'):
+    """Validates the tag key.
+
+    :param tag_key: The tag key to check against.
+    :param exception_param: The exception parameter to send over to help format the message. This is to reflect
+                            the difference between the tag and untag APIs.
+    :return:
+    """
+    # Validate that the key length is correct:
+    if len(tag_key) > 128:
+        raise TagKeyTooBig(tag_key, param=exception_param)
+
+    # Validate that the tag key fits the proper Regex:
+    # [\w\s_.:/=+\-@]+ SHOULD be the same as the Java regex on the AWS documentation: [\p{L}\p{Z}\p{N}_.:/=+\-@]+
+    match = re.findall(r'[\w\s_.:/=+\-@]+', tag_key)
+    # Kudos if you can come up with a better way of doing a global search :)
+    if not len(match) or len(match[0]) < len(tag_key):
+        raise InvalidTagCharacters(tag_key, param=exception_param)
+
+
+def check_tag_duplicate(all_tags, tag_key):
+    """Validates that a tag key is not a duplicate
+
+    :param all_tags: Dict to check if there is a duplicate tag.
+    :param tag_key: The tag key to check against.
+    :return:
+    """
+    if all_tags.get(tag_key):
+        raise DuplicateTags()
+
+
+def validate_tags(tags):
+    proper_tags = {}
+
+    if len(tags) > 50:
+        raise TooManyTags(tags)
+
+    for tag in tags:
+        # Validate the Key:
+        validate_tag_key(tag['Key'])
+        check_tag_duplicate(proper_tags, tag['Key'])
+
+        # Validate the Value:
+        if len(tag['Value']) > 256:
+            raise TagValueTooBig(tag['Value'])
+
+        proper_tags[tag['Key']] = tag['Value']
+
+    return proper_tags
+
+
class ConfigEmptyDictable(BaseModel):
    """Base class to make serialization easy. This assumes that the sub-class will NOT return 'None's in the JSON."""

    def __init__(self, capitalize_start=False, capitalize_arn=True):
        """Assists with the serialization of the config object

        :param capitalize_start: For some Config services, the first letter is lowercase -- for others it's capital
        :param capitalize_arn: For some Config services, the API expects 'ARN' and for others, it expects 'Arn'
        """
        self.capitalize_start = capitalize_start
        self.capitalize_arn = capitalize_arn

    def to_dict(self):
        data = {}
        for item, value in self.__dict__.items():
            if value is not None:
                if isinstance(value, ConfigEmptyDictable):
                    data[snake_to_camels(item, self.capitalize_start, self.capitalize_arn)] = value.to_dict()
                else:
                    data[snake_to_camels(item, self.capitalize_start, self.capitalize_arn)] = value

        # Cleanse the extra properties:
        for prop in POP_STRINGS:
            data.pop(prop, None)

        return data

@@ -50,8 +141,9 @@ class ConfigEmptyDictable(BaseModel):

class ConfigRecorderStatus(ConfigEmptyDictable):

    def __init__(self, name):
        super(ConfigRecorderStatus, self).__init__()

        self.name = name
        self.recording = False
        self.last_start_time = None
        self.last_stop_time = None
@@ -75,12 +167,16 @@ class ConfigRecorderStatus(ConfigEmptyDictable):

class ConfigDeliverySnapshotProperties(ConfigEmptyDictable):

    def __init__(self, delivery_frequency):
        super(ConfigDeliverySnapshotProperties, self).__init__()

        self.delivery_frequency = delivery_frequency


class ConfigDeliveryChannel(ConfigEmptyDictable):

    def __init__(self, name, s3_bucket_name, prefix=None, sns_arn=None, snapshot_properties=None):
        super(ConfigDeliveryChannel, self).__init__()

        self.name = name
        self.s3_bucket_name = s3_bucket_name
        self.s3_key_prefix = prefix

@@ -91,6 +187,8 @@ class ConfigDeliveryChannel(ConfigEmptyDictable):

class RecordingGroup(ConfigEmptyDictable):

    def __init__(self, all_supported=True, include_global_resource_types=False, resource_types=None):
        super(RecordingGroup, self).__init__()

        self.all_supported = all_supported
        self.include_global_resource_types = include_global_resource_types
        self.resource_types = resource_types

@@ -99,6 +197,8 @@ class RecordingGroup(ConfigEmptyDictable):

class ConfigRecorder(ConfigEmptyDictable):

    def __init__(self, role_arn, recording_group, name='default', status=None):
        super(ConfigRecorder, self).__init__()

        self.name = name
        self.role_arn = role_arn
        self.recording_group = recording_group
@@ -109,18 +209,118 @@ class ConfigRecorder(ConfigEmptyDictable):

        self.status = status


class AccountAggregatorSource(ConfigEmptyDictable):

    def __init__(self, account_ids, aws_regions=None, all_aws_regions=None):
        super(AccountAggregatorSource, self).__init__(capitalize_start=True)

        # Can't have both the regions and all_regions flag present -- also can't have them both missing:
        if aws_regions and all_aws_regions:
            raise InvalidParameterValueException('Your configuration aggregator contains a list of regions and also specifies '
                                                 'the use of all regions. You must choose one of these options.')

        if not (aws_regions or all_aws_regions):
            raise InvalidParameterValueException('Your request does not specify any regions. Select AWS Config-supported '
                                                 'regions and try again.')

        self.account_ids = account_ids
        self.aws_regions = aws_regions

        if not all_aws_regions:
            all_aws_regions = False

        self.all_aws_regions = all_aws_regions


class OrganizationAggregationSource(ConfigEmptyDictable):

    def __init__(self, role_arn, aws_regions=None, all_aws_regions=None):
        super(OrganizationAggregationSource, self).__init__(capitalize_start=True, capitalize_arn=False)

        # Can't have both the regions and all_regions flag present -- also can't have them both missing:
        if aws_regions and all_aws_regions:
            raise InvalidParameterValueException('Your configuration aggregator contains a list of regions and also specifies '
                                                 'the use of all regions. You must choose one of these options.')

        if not (aws_regions or all_aws_regions):
            raise InvalidParameterValueException('Your request does not specify any regions. Select AWS Config-supported '
                                                 'regions and try again.')

        self.role_arn = role_arn
        self.aws_regions = aws_regions

        if not all_aws_regions:
            all_aws_regions = False

        self.all_aws_regions = all_aws_regions

class ConfigAggregator(ConfigEmptyDictable):

    def __init__(self, name, region, account_sources=None, org_source=None, tags=None):
        super(ConfigAggregator, self).__init__(capitalize_start=True, capitalize_arn=False)

        self.configuration_aggregator_name = name
        self.configuration_aggregator_arn = 'arn:aws:config:{region}:{id}:config-aggregator/config-aggregator-{random}'.format(
            region=region,
            id=DEFAULT_ACCOUNT_ID,
            random=random_string()
        )
        self.account_aggregation_sources = account_sources
        self.organization_aggregation_source = org_source
        self.creation_time = datetime2int(datetime.utcnow())
        self.last_updated_time = datetime2int(datetime.utcnow())

        # Tags are listed in the list_tags_for_resource API call ... not implementing yet -- please feel free to!
        self.tags = tags or {}

    # Override the to_dict so that we can format the tags properly...
    def to_dict(self):
        result = super(ConfigAggregator, self).to_dict()

        # Override the account aggregation sources if present:
        if self.account_aggregation_sources:
            result['AccountAggregationSources'] = [a.to_dict() for a in self.account_aggregation_sources]

        # Tags are listed in the list_tags_for_resource API call ... not implementing yet -- please feel free to!
        # if self.tags:
        #     result['Tags'] = [{'Key': key, 'Value': value} for key, value in self.tags.items()]

        return result


class ConfigAggregationAuthorization(ConfigEmptyDictable):

    def __init__(self, current_region, authorized_account_id, authorized_aws_region, tags=None):
        super(ConfigAggregationAuthorization, self).__init__(capitalize_start=True, capitalize_arn=False)

        self.aggregation_authorization_arn = 'arn:aws:config:{region}:{id}:aggregation-authorization/' \
                                             '{auth_account}/{auth_region}'.format(region=current_region,
                                                                                   id=DEFAULT_ACCOUNT_ID,
                                                                                   auth_account=authorized_account_id,
                                                                                   auth_region=authorized_aws_region)
        self.authorized_account_id = authorized_account_id
        self.authorized_aws_region = authorized_aws_region
        self.creation_time = datetime2int(datetime.utcnow())

        # Tags are listed in the list_tags_for_resource API call ... not implementing yet -- please feel free to!
        self.tags = tags or {}

class ConfigBackend(BaseBackend):

    def __init__(self):
        self.recorders = {}
        self.delivery_channels = {}
        self.config_aggregators = {}
        self.aggregation_authorizations = {}

    @staticmethod
    def _validate_resource_types(resource_list):
        # Load the service file:
        resource_package = 'botocore'
        resource_path = '/'.join(('data', 'config', '2014-11-12', 'service-2.json'))
        config_schema = json.loads(pkg_resources.resource_string(resource_package, resource_path))

        # Verify that each entry exists in the supported list:
        bad_list = []

@@ -128,11 +328,11 @@ class ConfigBackend(BaseBackend):
            # For PY2:
            r_str = str(resource)

            if r_str not in config_schema['shapes']['ResourceType']['enum']:
                bad_list.append(r_str)

        if bad_list:
            raise InvalidResourceTypeException(bad_list, config_schema['shapes']['ResourceType']['enum'])

    @staticmethod
    def _validate_delivery_snapshot_properties(properties):

@@ -147,6 +347,158 @@ class ConfigBackend(BaseBackend):
            raise InvalidDeliveryFrequency(properties.get('deliveryFrequency', None),
                                           conifg_schema['shapes']['MaximumExecutionFrequency']['enum'])

    def put_configuration_aggregator(self, config_aggregator, region):
        # Validate the name:
        if len(config_aggregator['ConfigurationAggregatorName']) > 256:
            raise NameTooLongException(config_aggregator['ConfigurationAggregatorName'], 'configurationAggregatorName')

        account_sources = None
        org_source = None

        # Tag validation:
        tags = validate_tags(config_aggregator.get('Tags', []))

        # Exception if both AccountAggregationSources and OrganizationAggregationSource are supplied:
        if config_aggregator.get('AccountAggregationSources') and config_aggregator.get('OrganizationAggregationSource'):
            raise InvalidParameterValueException('The configuration aggregator cannot be created because your request contains both the'
                                                 ' AccountAggregationSource and the OrganizationAggregationSource. Include only '
                                                 'one aggregation source and try again.')

        # If neither are supplied:
        if not config_aggregator.get('AccountAggregationSources') and not config_aggregator.get('OrganizationAggregationSource'):
            raise InvalidParameterValueException('The configuration aggregator cannot be created because your request is missing either '
                                                 'the AccountAggregationSource or the OrganizationAggregationSource. Include the '
                                                 'appropriate aggregation source and try again.')

        if config_aggregator.get('AccountAggregationSources'):
            # Currently, only 1 account aggregation source can be set:
            if len(config_aggregator['AccountAggregationSources']) > 1:
                raise TooManyAccountSources(len(config_aggregator['AccountAggregationSources']))

            account_sources = []
            for a in config_aggregator['AccountAggregationSources']:
                account_sources.append(AccountAggregatorSource(a['AccountIds'], aws_regions=a.get('AwsRegions'),
                                                               all_aws_regions=a.get('AllAwsRegions')))

        else:
            org_source = OrganizationAggregationSource(config_aggregator['OrganizationAggregationSource']['RoleArn'],
                                                       aws_regions=config_aggregator['OrganizationAggregationSource'].get('AwsRegions'),
                                                       all_aws_regions=config_aggregator['OrganizationAggregationSource'].get(
                                                           'AllAwsRegions'))

        # Grab the existing one if it exists and update it:
        if not self.config_aggregators.get(config_aggregator['ConfigurationAggregatorName']):
            aggregator = ConfigAggregator(config_aggregator['ConfigurationAggregatorName'], region, account_sources=account_sources,
                                          org_source=org_source, tags=tags)
            self.config_aggregators[config_aggregator['ConfigurationAggregatorName']] = aggregator

        else:
            aggregator = self.config_aggregators[config_aggregator['ConfigurationAggregatorName']]
            aggregator.tags = tags
            aggregator.account_aggregation_sources = account_sources
            aggregator.organization_aggregation_source = org_source
            aggregator.last_updated_time = datetime2int(datetime.utcnow())

        return aggregator.to_dict()

    def describe_configuration_aggregators(self, names, token, limit):
        limit = DEFAULT_PAGE_SIZE if not limit or limit < 0 else limit
        agg_list = []
        result = {'ConfigurationAggregators': []}

        if names:
            for name in names:
                if not self.config_aggregators.get(name):
                    raise NoSuchConfigurationAggregatorException(number=len(names))

                agg_list.append(name)

        else:
            agg_list = list(self.config_aggregators.keys())

        # Empty?
        if not agg_list:
            return result

        # Sort by name:
        sorted_aggregators = sorted(agg_list)

        # Get the start:
        if not token:
            start = 0
        else:
            # Tokens for this moto feature are just the next names of the items in the list:
            if not self.config_aggregators.get(token):
                raise InvalidNextTokenException()

            start = sorted_aggregators.index(token)

        # Get the list of items to collect:
        agg_list = sorted_aggregators[start:(start + limit)]
        result['ConfigurationAggregators'] = [self.config_aggregators[agg].to_dict() for agg in agg_list]

        if len(sorted_aggregators) > (start + limit):
            result['NextToken'] = sorted_aggregators[start + limit]

        return result
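The pagination scheme above is worth calling out: the `NextToken` is not opaque, it is simply the name of the first item on the next page. A minimal standalone sketch of the same idea (hypothetical helper, not moto's code):

```python
# Name-keyed pagination as used by describe_configuration_aggregators above:
# the token IS the first name of the next page. An unknown token would raise
# ValueError here (moto raises InvalidNextTokenException instead).
def paginate_by_name(sorted_names, token=None, limit=2):
    start = sorted_names.index(token) if token else 0
    page = sorted_names[start:start + limit]
    next_token = sorted_names[start + limit] if len(sorted_names) > start + limit else None
    return page, next_token

page, token = paginate_by_name(['a', 'b', 'c', 'd', 'e'], limit=2)
print(page, token)  # ['a', 'b'] c
page, token = paginate_by_name(['a', 'b', 'c', 'd', 'e'], token=token, limit=2)
print(page, token)  # ['c', 'd'] e
```

Because items are sorted by name on every call, the token stays valid even as other aggregators are created or deleted in between pages.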

    def delete_configuration_aggregator(self, config_aggregator):
        if not self.config_aggregators.get(config_aggregator):
            raise NoSuchConfigurationAggregatorException()

        del self.config_aggregators[config_aggregator]

    def put_aggregation_authorization(self, current_region, authorized_account, authorized_region, tags):
        # Tag validation:
        tags = validate_tags(tags or [])

        # Does this already exist?
        key = '{}/{}'.format(authorized_account, authorized_region)
        agg_auth = self.aggregation_authorizations.get(key)
        if not agg_auth:
            agg_auth = ConfigAggregationAuthorization(current_region, authorized_account, authorized_region, tags=tags)
            self.aggregation_authorizations['{}/{}'.format(authorized_account, authorized_region)] = agg_auth
        else:
            # Only update the tags:
            agg_auth.tags = tags

        return agg_auth.to_dict()

    def describe_aggregation_authorizations(self, token, limit):
        limit = DEFAULT_PAGE_SIZE if not limit or limit < 0 else limit
        result = {'AggregationAuthorizations': []}

        if not self.aggregation_authorizations:
            return result

        # Sort by name:
        sorted_authorizations = sorted(self.aggregation_authorizations.keys())

        # Get the start:
        if not token:
            start = 0
        else:
            # Tokens for this moto feature are just the next names of the items in the list:
            if not self.aggregation_authorizations.get(token):
                raise InvalidNextTokenException()

            start = sorted_authorizations.index(token)

        # Get the list of items to collect:
        auth_list = sorted_authorizations[start:(start + limit)]
        result['AggregationAuthorizations'] = [self.aggregation_authorizations[auth].to_dict() for auth in auth_list]

        if len(sorted_authorizations) > (start + limit):
            result['NextToken'] = sorted_authorizations[start + limit]

        return result

    def delete_aggregation_authorization(self, authorized_account, authorized_region):
        # This will always return a 200 -- regardless if there is or isn't an existing
        # aggregation authorization.
        key = '{}/{}'.format(authorized_account, authorized_region)
        self.aggregation_authorizations.pop(key, None)

    def put_configuration_recorder(self, config_recorder):
        # Validate the name:
        if not config_recorder.get('name'):
@@ -13,6 +13,39 @@ class ConfigResponse(BaseResponse):
        self.config_backend.put_configuration_recorder(self._get_param('ConfigurationRecorder'))
        return ""

    def put_configuration_aggregator(self):
        aggregator = self.config_backend.put_configuration_aggregator(json.loads(self.body), self.region)
        schema = {'ConfigurationAggregator': aggregator}
        return json.dumps(schema)

    def describe_configuration_aggregators(self):
        aggregators = self.config_backend.describe_configuration_aggregators(self._get_param('ConfigurationAggregatorNames'),
                                                                             self._get_param('NextToken'),
                                                                             self._get_param('Limit'))
        return json.dumps(aggregators)

    def delete_configuration_aggregator(self):
        self.config_backend.delete_configuration_aggregator(self._get_param('ConfigurationAggregatorName'))
        return ""

    def put_aggregation_authorization(self):
        agg_auth = self.config_backend.put_aggregation_authorization(self.region,
                                                                     self._get_param('AuthorizedAccountId'),
                                                                     self._get_param('AuthorizedAwsRegion'),
                                                                     self._get_param('Tags'))
        schema = {'AggregationAuthorization': agg_auth}
        return json.dumps(schema)

    def describe_aggregation_authorizations(self):
        authorizations = self.config_backend.describe_aggregation_authorizations(self._get_param('NextToken'), self._get_param('Limit'))

        return json.dumps(authorizations)

    def delete_aggregation_authorization(self):
        self.config_backend.delete_aggregation_authorization(self._get_param('AuthorizedAccountId'), self._get_param('AuthorizedAwsRegion'))

        return ""

    def describe_configuration_recorders(self):
        recorders = self.config_backend.describe_configuration_recorders(self._get_param('ConfigurationRecorderNames'))
        schema = {'ConfigurationRecorders': recorders}
@@ -106,7 +106,7 @@ class AssumedRoleAccessKey(object):
        self._access_key_id = access_key_id
        self._secret_access_key = assumed_role.secret_access_key
        self._session_token = assumed_role.session_token
        self._owner_role_name = assumed_role.role_arn.split("/")[-1]
        self._session_name = assumed_role.session_name
        if headers["X-Amz-Security-Token"] != self._session_token:
            raise CreateAccessKeyFailure(reason="InvalidToken")

@@ -172,6 +172,8 @@ class IAMRequestBase(object):
            self._raise_signature_does_not_match()

    def check_action_permitted(self):
        if self._action == 'sts:GetCallerIdentity':  # always allowed, even if there's an explicit Deny for it
            return True
        policies = self._access_key.collect_policies()

        permitted = False
@@ -318,6 +318,9 @@ class DynamoHandler(BaseResponse):

        for table_name, table_request in table_batches.items():
            keys = table_request['Keys']
            if self._contains_duplicates(keys):
                er = 'com.amazon.coral.validate#ValidationException'
                return self.error(er, 'Provided list of item keys contains duplicates')
            attributes_to_get = table_request.get('AttributesToGet')
            results["Responses"][table_name] = []
            for key in keys:

@@ -333,6 +336,15 @@ class DynamoHandler(BaseResponse):
                })
        return dynamo_json_dump(results)

    def _contains_duplicates(self, keys):
        unique_keys = []
        for k in keys:
            if k in unique_keys:
                return True
            else:
                unique_keys.append(k)
        return False

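A note on `_contains_duplicates` above: DynamoDB keys are dicts, which are unhashable, so a set cannot be used and the helper falls back to an O(n²) list membership scan. A standalone sketch:

```python
# Standalone sketch of the duplicate check above. Each key is a dict like
# {'id': {'S': '1'}}, which cannot go into a set, hence the list scan.
def contains_duplicates(keys):
    seen = []
    for k in keys:
        if k in seen:
            return True
        seen.append(k)
    return False

print(contains_duplicates([{'id': {'S': '1'}}, {'id': {'S': '1'}}]))  # True
print(contains_duplicates([{'id': {'S': '1'}}, {'id': {'S': '2'}}]))  # False
```

BatchGetItem caps requests at 100 keys per table, so the quadratic scan is harmless in practice.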
    def query(self):
        name = self.body['TableName']
        # {u'KeyConditionExpression': u'#n0 = :v0', u'ExpressionAttributeValues': {u':v0': {u'S': u'johndoe'}}, u'ExpressionAttributeNames': {u'#n0': u'username'}}

@@ -600,7 +612,7 @@ class DynamoHandler(BaseResponse):
        # E.g. `a = b + c` -> `a=b+c`
        if update_expression:
            update_expression = re.sub(
                r'\s*([=\+-])\s*', '\\1', update_expression)

        try:
            item = self.dynamodb_backend.update_item(
@@ -523,3 +523,11 @@ class OperationNotPermitted3(EC2ClientError):
                pcx_id,
                acceptor_region)
        )


class InvalidLaunchTemplateNameError(EC2ClientError):
    def __init__(self):
        super(InvalidLaunchTemplateNameError, self).__init__(
            "InvalidLaunchTemplateName.AlreadyExistsException",
            "Launch template name already in use."
        )
@@ -20,7 +20,6 @@ from boto.ec2.blockdevicemapping import BlockDeviceMapping, BlockDeviceType
from boto.ec2.spotinstancerequest import SpotInstanceRequest as BotoSpotRequest
from boto.ec2.launchspecification import LaunchSpecification

from moto.compat import OrderedDict
from moto.core import BaseBackend
from moto.core.models import Model, BaseModel

@@ -49,6 +48,7 @@ from .exceptions import (
    InvalidKeyPairDuplicateError,
    InvalidKeyPairFormatError,
    InvalidKeyPairNameError,
    InvalidLaunchTemplateNameError,
    InvalidNetworkAclIdError,
    InvalidNetworkAttachmentIdError,
    InvalidNetworkInterfaceIdError,

@@ -98,6 +98,7 @@ from .utils import (
    random_internet_gateway_id,
    random_ip,
    random_ipv6_cidr,
    random_launch_template_id,
    random_nat_gateway_id,
    random_key_pair,
    random_private_ip,
@@ -4113,6 +4114,92 @@ class NatGatewayBackend(object):
        return self.nat_gateways.pop(nat_gateway_id)


class LaunchTemplateVersion(object):
    def __init__(self, template, number, data, description):
        self.template = template
        self.number = number
        self.data = data
        self.description = description
        self.create_time = utc_date_and_time()


class LaunchTemplate(TaggedEC2Resource):
    def __init__(self, backend, name, template_data, version_description):
        self.ec2_backend = backend
        self.name = name
        self.id = random_launch_template_id()
        self.create_time = utc_date_and_time()

        self.versions = []
        self.create_version(template_data, version_description)
        self.default_version_number = 1

    def create_version(self, data, description):
        num = len(self.versions) + 1
        version = LaunchTemplateVersion(self, num, data, description)
        self.versions.append(version)
        return version

    def is_default(self, version):
        # Compare version numbers (comparing the default_version method
        # itself against an int would always be False):
        return self.default_version_number == version.number

    def get_version(self, num):
        return self.versions[num - 1]

    def default_version(self):
        return self.versions[self.default_version_number - 1]

    def latest_version(self):
        return self.versions[-1]

    @property
    def latest_version_number(self):
        return self.latest_version().number

    def get_filter_value(self, filter_name):
        if filter_name == 'launch-template-name':
            return self.name
        else:
            return super(LaunchTemplate, self).get_filter_value(
                filter_name, "DescribeLaunchTemplates")

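The version bookkeeping above is simple but easy to get off-by-one: version numbers are 1-based while `self.versions` is a 0-based list. A minimal sketch with hypothetical stand-in classes (not moto's own):

```python
# Hypothetical stand-ins illustrating LaunchTemplate's 1-based versioning.
class Version(object):
    def __init__(self, number, data):
        self.number = number
        self.data = data

class Template(object):
    def __init__(self, data):
        self.versions = []
        self.create_version(data)          # version 1 is created on construction
        self.default_version_number = 1

    def create_version(self, data):
        version = Version(len(self.versions) + 1, data)
        self.versions.append(version)
        return version

    def get_version(self, num):
        return self.versions[num - 1]      # 1-based number -> 0-based index

t = Template({'InstanceType': 't2.micro'})
v2 = t.create_version({'InstanceType': 't2.small'})
print(v2.number)                           # 2
print(t.get_version(1).data['InstanceType'])  # t2.micro
```

This matches the EC2 API, where the first version of a launch template is always version 1 and is the default until changed.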
class LaunchTemplateBackend(object):
    def __init__(self):
        self.launch_template_name_to_ids = {}
        self.launch_templates = OrderedDict()
        self.launch_template_insert_order = []
        super(LaunchTemplateBackend, self).__init__()

    def create_launch_template(self, name, description, template_data):
        if name in self.launch_template_name_to_ids:
            raise InvalidLaunchTemplateNameError()
        template = LaunchTemplate(self, name, template_data, description)
        self.launch_templates[template.id] = template
        self.launch_template_name_to_ids[template.name] = template.id
        self.launch_template_insert_order.append(template.id)
        return template

    def get_launch_template(self, template_id):
        return self.launch_templates[template_id]

    def get_launch_template_by_name(self, name):
        return self.get_launch_template(self.launch_template_name_to_ids[name])

    def get_launch_templates(self, template_names=None, template_ids=None, filters=None):
        if template_names and not template_ids:
            template_ids = []
            for name in template_names:
                template_ids.append(self.launch_template_name_to_ids[name])

        if template_ids:
            templates = [self.launch_templates[tid] for tid in template_ids]
        else:
            templates = list(self.launch_templates.values())

        return generic_filter(filters, templates)


class EC2Backend(BaseBackend, InstanceBackend, TagBackend, EBSBackend,
                 RegionsAndZonesBackend, SecurityGroupBackend, AmiBackend,
                 VPCBackend, SubnetBackend, SubnetRouteTableAssociationBackend,

@@ -4122,7 +4209,7 @@ class EC2Backend(BaseBackend, InstanceBackend, TagBackend, EBSBackend,
                 VPCGatewayAttachmentBackend, SpotFleetBackend,
                 SpotRequestBackend, ElasticAddressBackend, KeyPairBackend,
                 DHCPOptionsSetBackend, NetworkAclBackend, VpnGatewayBackend,
                 CustomerGatewayBackend, NatGatewayBackend, LaunchTemplateBackend):
    def __init__(self, region_name):
        self.region_name = region_name
        super(EC2Backend, self).__init__()

@@ -4177,6 +4264,8 @@ class EC2Backend(BaseBackend, InstanceBackend, TagBackend, EBSBackend,
        elif resource_prefix == EC2_RESOURCE_TO_PREFIX['internet-gateway']:
            self.describe_internet_gateways(
                internet_gateway_ids=[resource_id])
        elif resource_prefix == EC2_RESOURCE_TO_PREFIX['launch-template']:
            self.get_launch_template(resource_id)
        elif resource_prefix == EC2_RESOURCE_TO_PREFIX['network-acl']:
            self.get_all_network_acls()
        elif resource_prefix == EC2_RESOURCE_TO_PREFIX['network-interface']:
@@ -14,6 +14,7 @@ from .instances import InstanceResponse
from .internet_gateways import InternetGateways
from .ip_addresses import IPAddresses
from .key_pairs import KeyPairs
from .launch_templates import LaunchTemplates
from .monitoring import Monitoring
from .network_acls import NetworkACLs
from .placement_groups import PlacementGroups

@@ -49,6 +50,7 @@ class EC2Response(
    InternetGateways,
    IPAddresses,
    KeyPairs,
    LaunchTemplates,
    Monitoring,
    NetworkACLs,
    PlacementGroups,
252 moto/ec2/responses/launch_templates.py (new file)
@@ -0,0 +1,252 @@
import six
import uuid
from moto.core.responses import BaseResponse
from moto.ec2.models import OWNER_ID
from moto.ec2.exceptions import FilterNotImplementedError
from moto.ec2.utils import filters_from_querystring

from xml.etree import ElementTree
from xml.dom import minidom


def xml_root(name):
    root = ElementTree.Element(name, {
        "xmlns": "http://ec2.amazonaws.com/doc/2016-11-15/"
    })
    request_id = str(uuid.uuid4()) + "example"
    ElementTree.SubElement(root, "requestId").text = request_id

    return root


def xml_serialize(tree, key, value):
    name = key[0].lower() + key[1:]
    if isinstance(value, list):
        if name[-1] == 's':
            name = name[:-1]

        name = name + 'Set'

    node = ElementTree.SubElement(tree, name)

    if isinstance(value, (str, int, float, six.text_type)):
        node.text = str(value)
    elif isinstance(value, dict):
        for dictkey, dictvalue in six.iteritems(value):
            xml_serialize(node, dictkey, dictvalue)
    elif isinstance(value, list):
        for item in value:
            xml_serialize(node, 'item', item)
    elif value is None:
        pass
    else:
        raise NotImplementedError("Don't know how to serialize \"{}\" to xml".format(value.__class__))


def pretty_xml(tree):
    rough = ElementTree.tostring(tree, 'utf-8')
    parsed = minidom.parseString(rough)
    return parsed.toprettyxml(indent=' ')
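The naming convention `xml_serialize` implements is the interesting part: keys become lowerCamel element names, and a list-valued key `Foos` becomes a `<fooSet>` of `<item>` children, matching EC2's query-API XML style. A simplified, self-contained sketch (plain Python 3, no `six`):

```python
from xml.etree import ElementTree

# Simplified sketch of the xml_serialize convention above: lowerCamel
# element names; a list under 'Foos' becomes <fooSet><item>...</item></fooSet>.
def serialize(parent, key, value):
    name = key[0].lower() + key[1:]
    if isinstance(value, list):
        name = (name[:-1] if name.endswith('s') else name) + 'Set'
    node = ElementTree.SubElement(parent, name)
    if isinstance(value, dict):
        for k, v in value.items():
            serialize(node, k, v)
    elif isinstance(value, list):
        for item in value:
            serialize(node, 'item', item)
    elif value is not None:
        node.text = str(value)

root = ElementTree.Element('DescribeLaunchTemplatesResponse')
serialize(root, 'LaunchTemplates', [{'LaunchTemplateName': 'web'}])
print(ElementTree.tostring(root).decode())
```

This mirrors what botocore expects when it parses EC2 responses, so the mocked XML round-trips cleanly back into boto3 result dicts.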

def parse_object(raw_data):
    out_data = {}
    for key, value in six.iteritems(raw_data):
        key_fix_splits = key.split("_")
        key_len = len(key_fix_splits)

        new_key = ""
        for i in range(0, key_len):
            new_key += key_fix_splits[i][0].upper() + key_fix_splits[i][1:]

        data = out_data
        splits = new_key.split(".")
        for split in splits[:-1]:
            if split not in data:
                data[split] = {}
            data = data[split]

        data[splits[-1]] = value

    out_data = parse_lists(out_data)
    return out_data


def parse_lists(data):
    for key, value in six.iteritems(data):
        if isinstance(value, dict):
            keys = data[key].keys()
            is_list = all(map(lambda k: k.isnumeric(), keys))

            if is_list:
                new_value = []
                keys = sorted(list(keys))
                for k in keys:
                    lvalue = value[k]
                    if isinstance(lvalue, dict):
                        lvalue = parse_lists(lvalue)
                    new_value.append(lvalue)
                data[key] = new_value
    return data
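What `parse_object`/`parse_lists` achieve: flat EC2 querystring keys like `Tag.1.Key` become nested dicts, and any level whose keys are all numeric collapses into a list (`parse_object` additionally camel-cases underscore-separated key parts, which this sketch omits). A hedged standalone sketch:

```python
# Sketch of the nesting + list-collapsing done by parse_object/parse_lists
# above. Note: like the original, numeric keys sort as strings, so '10'
# would sort before '2'.
def nest(flat):
    out = {}
    for key, value in flat.items():
        node = out
        parts = key.split('.')
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return collapse(out)

def collapse(data):
    for key, value in data.items():
        if isinstance(value, dict):
            if value and all(k.isdigit() for k in value):
                data[key] = [collapse(v) if isinstance(v, dict) else v
                             for _, v in sorted(value.items())]
            else:
                collapse(value)
    return data

print(nest({'Ebs.VolumeSize': '20', 'Tag.1.Key': 'env', 'Tag.2.Key': 'team'}))
```

This is the inverse of botocore's flattening: boto3 sends `LaunchTemplateData` as dotted query parameters, and the mock rebuilds the nested structure before storing it.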
|
||||
class LaunchTemplates(BaseResponse):
    def create_launch_template(self):
        name = self._get_param('LaunchTemplateName')
        version_description = self._get_param('VersionDescription')
        tag_spec = self._parse_tag_specification("TagSpecification")

        raw_template_data = self._get_dict_param('LaunchTemplateData.')
        parsed_template_data = parse_object(raw_template_data)

        if self.is_not_dryrun('CreateLaunchTemplate'):
            if tag_spec:
                if 'TagSpecifications' not in parsed_template_data:
                    parsed_template_data['TagSpecifications'] = []
                converted_tag_spec = []
                for resource_type, tags in six.iteritems(tag_spec):
                    converted_tag_spec.append({
                        "ResourceType": resource_type,
                        "Tags": [{"Key": key, "Value": value} for key, value in six.iteritems(tags)],
                    })

                parsed_template_data['TagSpecifications'].extend(converted_tag_spec)

            template = self.ec2_backend.create_launch_template(name, version_description, parsed_template_data)
            version = template.default_version()

            tree = xml_root("CreateLaunchTemplateResponse")
            xml_serialize(tree, "launchTemplate", {
                "createTime": version.create_time,
                "createdBy": "arn:aws:iam::{OWNER_ID}:root".format(OWNER_ID=OWNER_ID),
                "defaultVersionNumber": template.default_version_number,
                "latestVersionNumber": version.number,
                "launchTemplateId": template.id,
                "launchTemplateName": template.name
            })

            return pretty_xml(tree)

    def create_launch_template_version(self):
        name = self._get_param('LaunchTemplateName')
        tmpl_id = self._get_param('LaunchTemplateId')
        if name:
            template = self.ec2_backend.get_launch_template_by_name(name)
        if tmpl_id:
            template = self.ec2_backend.get_launch_template(tmpl_id)

        version_description = self._get_param('VersionDescription')

        raw_template_data = self._get_dict_param('LaunchTemplateData.')
        template_data = parse_object(raw_template_data)

        if self.is_not_dryrun('CreateLaunchTemplateVersion'):
            version = template.create_version(template_data, version_description)

            tree = xml_root("CreateLaunchTemplateVersionResponse")
            xml_serialize(tree, "launchTemplateVersion", {
                "createTime": version.create_time,
                "createdBy": "arn:aws:iam::{OWNER_ID}:root".format(OWNER_ID=OWNER_ID),
                "defaultVersion": template.is_default(version),
                "launchTemplateData": version.data,
                "launchTemplateId": template.id,
                "launchTemplateName": template.name,
                "versionDescription": version.description,
                "versionNumber": version.number,
            })
            return pretty_xml(tree)

    # def delete_launch_template(self):
    #     pass

    # def delete_launch_template_versions(self):
    #     pass

    def describe_launch_template_versions(self):
        name = self._get_param('LaunchTemplateName')
        template_id = self._get_param('LaunchTemplateId')
        if name:
            template = self.ec2_backend.get_launch_template_by_name(name)
        if template_id:
            template = self.ec2_backend.get_launch_template(template_id)

        max_results = self._get_int_param("MaxResults", 15)
        versions = self._get_multi_param("LaunchTemplateVersion")
        min_version = self._get_int_param("MinVersion")
        max_version = self._get_int_param("MaxVersion")

        filters = filters_from_querystring(self.querystring)
        if filters:
            raise FilterNotImplementedError("all filters", "DescribeLaunchTemplateVersions")

        if self.is_not_dryrun('DescribeLaunchTemplateVersions'):
            tree = ElementTree.Element("DescribeLaunchTemplateVersionsResponse", {
                "xmlns": "http://ec2.amazonaws.com/doc/2016-11-15/",
            })
            request_id = ElementTree.SubElement(tree, "requestId")
            request_id.text = "65cadec1-b364-4354-8ca8-4176dexample"

            versions_node = ElementTree.SubElement(tree, "launchTemplateVersionSet")

            ret_versions = []
            if versions:
                for v in versions:
                    ret_versions.append(template.get_version(int(v)))
            elif min_version:
                if max_version:
                    vMax = max_version
                else:
                    vMax = min_version + max_results

                vMin = min_version - 1
                ret_versions = template.versions[vMin:vMax]
            elif max_version:
                vMax = max_version
                ret_versions = template.versions[:vMax]
            else:
                ret_versions = template.versions

            ret_versions = ret_versions[:max_results]

            for version in ret_versions:
                xml_serialize(versions_node, "item", {
                    "createTime": version.create_time,
                    "createdBy": "arn:aws:iam::{OWNER_ID}:root".format(OWNER_ID=OWNER_ID),
                    "defaultVersion": True,
                    "launchTemplateData": version.data,
                    "launchTemplateId": template.id,
                    "launchTemplateName": template.name,
                    "versionDescription": version.description,
                    "versionNumber": version.number,
                })

            return pretty_xml(tree)

    def describe_launch_templates(self):
        max_results = self._get_int_param("MaxResults", 15)
        template_names = self._get_multi_param("LaunchTemplateName")
        template_ids = self._get_multi_param("LaunchTemplateId")
        filters = filters_from_querystring(self.querystring)

        if self.is_not_dryrun("DescribeLaunchTemplates"):
            tree = ElementTree.Element("DescribeLaunchTemplatesResponse")
            templates_node = ElementTree.SubElement(tree, "launchTemplates")

            templates = self.ec2_backend.get_launch_templates(template_names=template_names, template_ids=template_ids, filters=filters)

            templates = templates[:max_results]

            for template in templates:
                xml_serialize(templates_node, "item", {
                    "createTime": template.create_time,
                    "createdBy": "arn:aws:iam::{OWNER_ID}:root".format(OWNER_ID=OWNER_ID),
                    "defaultVersionNumber": template.default_version_number,
                    "latestVersionNumber": template.latest_version_number,
                    "launchTemplateId": template.id,
                    "launchTemplateName": template.name,
                })

            return pretty_xml(tree)

    # def modify_launch_template(self):
    #     pass
@@ -20,6 +20,7 @@ EC2_RESOURCE_TO_PREFIX = {
     'image': 'ami',
     'instance': 'i',
     'internet-gateway': 'igw',
+    'launch-template': 'lt',
     'nat-gateway': 'nat',
     'network-acl': 'acl',
     'network-acl-subnet-assoc': 'aclassoc',

@@ -161,6 +162,10 @@ def random_nat_gateway_id():
     return random_id(prefix=EC2_RESOURCE_TO_PREFIX['nat-gateway'], size=17)


+def random_launch_template_id():
+    return random_id(prefix=EC2_RESOURCE_TO_PREFIX['launch-template'], size=17)
+
+
 def random_public_ip():
     return '54.214.{0}.{1}'.format(random.choice(range(255)),
                                    random.choice(range(255)))
@@ -8,6 +8,7 @@ import boto3
 import pytz
 from moto.core.exceptions import JsonRESTError
 from moto.core import BaseBackend, BaseModel
+from moto.core.utils import unix_time
 from moto.ec2 import ec2_backends
 from copy import copy

@@ -231,9 +232,9 @@ class Service(BaseObject):

         for deployment in response_object['deployments']:
             if isinstance(deployment['createdAt'], datetime):
-                deployment['createdAt'] = deployment['createdAt'].isoformat()
+                deployment['createdAt'] = unix_time(deployment['createdAt'].replace(tzinfo=None))
             if isinstance(deployment['updatedAt'], datetime):
-                deployment['updatedAt'] = deployment['updatedAt'].isoformat()
+                deployment['updatedAt'] = unix_time(deployment['updatedAt'].replace(tzinfo=None))

         return response_object
@@ -2,9 +2,11 @@ from __future__ import unicode_literals

 import datetime
 import re
+from jinja2 import Template
 from moto.compat import OrderedDict
 from moto.core.exceptions import RESTError
 from moto.core import BaseBackend, BaseModel
+from moto.core.utils import camelcase_to_underscores
 from moto.ec2.models import ec2_backends
 from moto.acm.models import acm_backends
 from .utils import make_arn_for_target_group

@@ -213,13 +215,12 @@ class FakeListener(BaseModel):
             action_type = action['Type']
             if action_type == 'forward':
                 default_actions.append({'type': action_type, 'target_group_arn': action['TargetGroupArn']})
-            elif action_type == 'redirect':
-                redirect_action = {'type': action_type, }
-                for redirect_config_key, redirect_config_value in action['RedirectConfig'].items():
+            elif action_type in ['redirect', 'authenticate-cognito']:
+                redirect_action = {'type': action_type}
+                key = 'RedirectConfig' if action_type == 'redirect' else 'AuthenticateCognitoConfig'
+                for redirect_config_key, redirect_config_value in action[key].items():
                     # need to match the output of _get_list_prefix
-                    if redirect_config_key == 'StatusCode':
-                        redirect_config_key = 'status_code'
-                    redirect_action['redirect_config._' + redirect_config_key.lower()] = redirect_config_value
+                    redirect_action[camelcase_to_underscores(key) + '._' + camelcase_to_underscores(redirect_config_key)] = redirect_config_value
                 default_actions.append(redirect_action)
             else:
                 raise InvalidActionTypeError(action_type, i + 1)

@@ -231,6 +232,32 @@ class FakeListener(BaseModel):
         return listener


+class FakeAction(BaseModel):
+    def __init__(self, data):
+        self.data = data
+        self.type = data.get("type")
+
+    def to_xml(self):
+        template = Template("""<Type>{{ action.type }}</Type>
+            {% if action.type == "forward" %}
+            <TargetGroupArn>{{ action.data["target_group_arn"] }}</TargetGroupArn>
+            {% elif action.type == "redirect" %}
+            <RedirectConfig>
+              <Protocol>{{ action.data["redirect_config._protocol"] }}</Protocol>
+              <Port>{{ action.data["redirect_config._port"] }}</Port>
+              <StatusCode>{{ action.data["redirect_config._status_code"] }}</StatusCode>
+            </RedirectConfig>
+            {% elif action.type == "authenticate-cognito" %}
+            <AuthenticateCognitoConfig>
+              <UserPoolArn>{{ action.data["authenticate_cognito_config._user_pool_arn"] }}</UserPoolArn>
+              <UserPoolClientId>{{ action.data["authenticate_cognito_config._user_pool_client_id"] }}</UserPoolClientId>
+              <UserPoolDomain>{{ action.data["authenticate_cognito_config._user_pool_domain"] }}</UserPoolDomain>
+            </AuthenticateCognitoConfig>
+            {% endif %}
+            """)
+        return template.render(action=self)
+
+
 class FakeRule(BaseModel):

     def __init__(self, listener_arn, conditions, priority, actions, is_default):
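The switch to `camelcase_to_underscores` is what makes the old explicit `StatusCode` -> `status_code` special case redundant. A standalone sketch of the idea — this re-implementation only approximates the moto helper's behavior for plain CamelCase keys, which is an assumption:

```python
import re

# Approximation of a camelcase_to_underscores helper: insert underscores at
# word boundaries, then lowercase. 'StatusCode' maps to 'status_code' with no
# special-casing needed.
def camelcase_to_underscores(name):
    s1 = re.sub(r'(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub(r'([a-z0-9])([A-Z])', r'\1_\2', s1).lower()

print(camelcase_to_underscores('StatusCode'))   # status_code
print(camelcase_to_underscores('UserPoolArn'))  # user_pool_arn
```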
@@ -402,6 +429,7 @@ class ELBv2Backend(BaseBackend):
         return new_load_balancer

     def create_rule(self, listener_arn, conditions, priority, actions):
+        actions = [FakeAction(action) for action in actions]
         listeners = self.describe_listeners(None, [listener_arn])
         if not listeners:
             raise ListenerNotFoundError()

@@ -429,20 +457,7 @@ class ELBv2Backend(BaseBackend):
             if rule.priority == priority:
                 raise PriorityInUseError()

-        # validate Actions
-        target_group_arns = [target_group.arn for target_group in self.target_groups.values()]
-        for i, action in enumerate(actions):
-            index = i + 1
-            action_type = action['type']
-            if action_type == 'forward':
-                action_target_group_arn = action['target_group_arn']
-                if action_target_group_arn not in target_group_arns:
-                    raise ActionTargetGroupNotFoundError(action_target_group_arn)
-            elif action_type == 'redirect':
-                # nothing to do
-                pass
-            else:
-                raise InvalidActionTypeError(action_type, index)
+        self._validate_actions(actions)

         # TODO: check for error 'TooManyRegistrationsForTargetId'
         # TODO: check for error 'TooManyRules'

@@ -452,6 +467,21 @@ class ELBv2Backend(BaseBackend):
         listener.register(rule)
         return [rule]

+    def _validate_actions(self, actions):
+        # validate Actions
+        target_group_arns = [target_group.arn for target_group in self.target_groups.values()]
+        for i, action in enumerate(actions):
+            index = i + 1
+            action_type = action.type
+            if action_type == 'forward':
+                action_target_group_arn = action.data['target_group_arn']
+                if action_target_group_arn not in target_group_arns:
+                    raise ActionTargetGroupNotFoundError(action_target_group_arn)
+            elif action_type in ['redirect', 'authenticate-cognito']:
+                pass
+            else:
+                raise InvalidActionTypeError(action_type, index)
+
     def create_target_group(self, name, **kwargs):
         if len(name) > 32:
             raise InvalidTargetGroupNameError(
@@ -495,26 +525,22 @@ class ELBv2Backend(BaseBackend):
         return target_group

     def create_listener(self, load_balancer_arn, protocol, port, ssl_policy, certificate, default_actions):
+        default_actions = [FakeAction(action) for action in default_actions]
         balancer = self.load_balancers.get(load_balancer_arn)
         if balancer is None:
             raise LoadBalancerNotFoundError()
         if port in balancer.listeners:
             raise DuplicateListenerError()

+        self._validate_actions(default_actions)
+
         arn = load_balancer_arn.replace(':loadbalancer/', ':listener/') + "/%s%s" % (port, id(self))
         listener = FakeListener(load_balancer_arn, arn, protocol, port, ssl_policy, certificate, default_actions)
         balancer.listeners[listener.arn] = listener
-        for i, action in enumerate(default_actions):
-            action_type = action['type']
-            if action_type == 'forward':
-                if action['target_group_arn'] in self.target_groups.keys():
-                    target_group = self.target_groups[action['target_group_arn']]
-                    target_group.load_balancer_arns.append(load_balancer_arn)
-            elif action_type == 'redirect':
-                # nothing to do
-                pass
-            else:
-                raise InvalidActionTypeError(action_type, i + 1)
+        for action in default_actions:
+            if action.type == 'forward':
+                target_group = self.target_groups[action.data['target_group_arn']]
+                target_group.load_balancer_arns.append(load_balancer_arn)

         return listener
@@ -648,6 +674,7 @@ class ELBv2Backend(BaseBackend):
             raise ListenerNotFoundError()

     def modify_rule(self, rule_arn, conditions, actions):
+        actions = [FakeAction(action) for action in actions]
         # if conditions or actions is empty list, do not update the attributes
         if not conditions and not actions:
             raise InvalidModifyRuleArgumentsError()

@@ -673,20 +700,7 @@ class ELBv2Backend(BaseBackend):
             # TODO: check pattern of value for 'path-pattern'

-        # validate Actions
-        target_group_arns = [target_group.arn for target_group in self.target_groups.values()]
-        if actions:
-            for i, action in enumerate(actions):
-                index = i + 1
-                action_type = action['type']
-                if action_type == 'forward':
-                    action_target_group_arn = action['target_group_arn']
-                    if action_target_group_arn not in target_group_arns:
-                        raise ActionTargetGroupNotFoundError(action_target_group_arn)
-                elif action_type == 'redirect':
-                    # nothing to do
-                    pass
-                else:
-                    raise InvalidActionTypeError(action_type, index)
+        self._validate_actions(actions)

         # TODO: check for error 'TooManyRegistrationsForTargetId'
         # TODO: check for error 'TooManyRules'
@@ -851,6 +865,7 @@ class ELBv2Backend(BaseBackend):
         return target_group

     def modify_listener(self, arn, port=None, protocol=None, ssl_policy=None, certificates=None, default_actions=None):
+        default_actions = [FakeAction(action) for action in default_actions]
         for load_balancer in self.load_balancers.values():
             if arn in load_balancer.listeners:
                 break

@@ -917,7 +932,7 @@ class ELBv2Backend(BaseBackend):
         for listener in load_balancer.listeners.values():
             for rule in listener.rules:
                 for action in rule.actions:
-                    if action.get('target_group_arn') == target_group_arn:
+                    if action.data.get('target_group_arn') == target_group_arn:
                         return True
         return False
@@ -775,16 +775,7 @@ CREATE_LISTENER_TEMPLATE = """<CreateListenerResponse xmlns="http://elasticloadb
       <DefaultActions>
         {% for action in listener.default_actions %}
         <member>
-          <Type>{{ action.type }}</Type>
-          {% if action["type"] == "forward" %}
-          <TargetGroupArn>{{ action["target_group_arn"] }}</TargetGroupArn>
-          {% elif action["type"] == "redirect" %}
-          <RedirectConfig>
-            <Protocol>{{ action["redirect_config._protocol"] }}</Protocol>
-            <Port>{{ action["redirect_config._port"] }}</Port>
-            <StatusCode>{{ action["redirect_config._status_code"] }}</StatusCode>
-          </RedirectConfig>
-          {% endif %}
+          {{ action.to_xml() }}
         </member>
         {% endfor %}
       </DefaultActions>

@@ -888,16 +879,7 @@ DESCRIBE_RULES_TEMPLATE = """<DescribeRulesResponse xmlns="http://elasticloadbal
       <Actions>
        {% for action in rule.actions %}
        <member>
-         <Type>{{ action["type"] }}</Type>
-         {% if action["type"] == "forward" %}
-         <TargetGroupArn>{{ action["target_group_arn"] }}</TargetGroupArn>
-         {% elif action["type"] == "redirect" %}
-         <RedirectConfig>
-           <Protocol>{{ action["redirect_config._protocol"] }}</Protocol>
-           <Port>{{ action["redirect_config._port"] }}</Port>
-           <StatusCode>{{ action["redirect_config._status_code"] }}</StatusCode>
-         </RedirectConfig>
-         {% endif %}
+         {{ action.to_xml() }}
        </member>
        {% endfor %}
       </Actions>

@@ -989,16 +971,7 @@ DESCRIBE_LISTENERS_TEMPLATE = """<DescribeLoadBalancersResponse xmlns="http://el
       <DefaultActions>
         {% for action in listener.default_actions %}
         <member>
-          <Type>{{ action.type }}</Type>
-          {% if action["type"] == "forward" %}
-          <TargetGroupArn>{{ action["target_group_arn"] }}</TargetGroupArn>
-          {% elif action["type"] == "redirect" %}
-          <RedirectConfig>
-            <Protocol>{{ action["redirect_config._protocol"] }}</Protocol>
-            <Port>{{ action["redirect_config._port"] }}</Port>
-            <StatusCode>{{ action["redirect_config._status_code"] }}</StatusCode>
-          </RedirectConfig>
-          {% endif %}
+          {{ action.to_xml() }}
         </member>
         {% endfor %}
       </DefaultActions>

@@ -1048,8 +1021,7 @@ MODIFY_RULE_TEMPLATE = """<ModifyRuleResponse xmlns="http://elasticloadbalancing
       <Actions>
         {% for action in rule.actions %}
         <member>
-          <Type>{{ action["type"] }}</Type>
-          <TargetGroupArn>{{ action["target_group_arn"] }}</TargetGroupArn>
+          {{ action.to_xml() }}
         </member>
         {% endfor %}
       </Actions>

@@ -1432,16 +1404,7 @@ MODIFY_LISTENER_TEMPLATE = """<ModifyListenerResponse xmlns="http://elasticloadb
       <DefaultActions>
         {% for action in listener.default_actions %}
         <member>
-          <Type>{{ action.type }}</Type>
-          {% if action["type"] == "forward" %}
-          <TargetGroupArn>{{ action["target_group_arn"] }}</TargetGroupArn>
-          {% elif action["type"] == "redirect" %}
-          <RedirectConfig>
-            <Protocol>{{ action["redirect_config._protocol"] }}</Protocol>
-            <Port>{{ action["redirect_config._port"] }}</Port>
-            <StatusCode>{{ action["redirect_config._status_code"] }}</StatusCode>
-          </RedirectConfig>
-          {% endif %}
+          {{ action.to_xml() }}
         </member>
         {% endfor %}
       </DefaultActions>
@@ -161,7 +161,7 @@ class InlinePolicy(Policy):

 class Role(BaseModel):

-    def __init__(self, role_id, name, assume_role_policy_document, path, permissions_boundary):
+    def __init__(self, role_id, name, assume_role_policy_document, path, permissions_boundary, description, tags):
         self.id = role_id
         self.name = name
         self.assume_role_policy_document = assume_role_policy_document

@@ -169,8 +169,8 @@ class Role(BaseModel):
         self.policies = {}
         self.managed_policies = {}
         self.create_date = datetime.utcnow()
-        self.tags = {}
-        self.description = ""
+        self.tags = tags
+        self.description = description
         self.permissions_boundary = permissions_boundary

     @property

@@ -185,7 +185,9 @@ class Role(BaseModel):
             role_name=resource_name,
             assume_role_policy_document=properties['AssumeRolePolicyDocument'],
             path=properties.get('Path', '/'),
-            permissions_boundary=properties.get('PermissionsBoundary', '')
+            permissions_boundary=properties.get('PermissionsBoundary', ''),
+            description=properties.get('Description', ''),
+            tags=properties.get('Tags', {})
         )

         policies = properties.get('Policies', [])

@@ -635,12 +637,13 @@ class IAMBackend(BaseBackend):

         return policies, marker

-    def create_role(self, role_name, assume_role_policy_document, path, permissions_boundary):
+    def create_role(self, role_name, assume_role_policy_document, path, permissions_boundary, description, tags):
         role_id = random_resource_id()
         if permissions_boundary and not self.policy_arn_regex.match(permissions_boundary):
             raise RESTError('InvalidParameterValue', 'Value ({}) for parameter PermissionsBoundary is invalid.'.format(permissions_boundary))

-        role = Role(role_id, role_name, assume_role_policy_document, path, permissions_boundary)
+        clean_tags = self._tag_verification(tags)
+        role = Role(role_id, role_name, assume_role_policy_document, path, permissions_boundary, description, clean_tags)
         self.roles[role_id] = role
         return role

@@ -691,10 +694,26 @@ class IAMBackend(BaseBackend):
         role = self.get_role(role_name)
         return role.policies.keys()

+    def _tag_verification(self, tags):
+        if len(tags) > 50:
+            raise TooManyTags(tags)
+
+        tag_keys = {}
+        for tag in tags:
+            # Need to index by the lowercase tag key since the keys are case insensitive, but their case is retained.
+            ref_key = tag['Key'].lower()
+            self._check_tag_duplicate(tag_keys, ref_key)
+            self._validate_tag_key(tag['Key'])
+            if len(tag['Value']) > 256:
+                raise TagValueTooBig(tag['Value'])
+
+            tag_keys[ref_key] = tag
+
+        return tag_keys
+
     def _validate_tag_key(self, tag_key, exception_param='tags.X.member.key'):
         """Validates the tag key.

         :param all_tags: Dict to check if there is a duplicate tag.
         :param tag_key: The tag key to check against.
         :param exception_param: The exception parameter to send over to help format the message. This is to reflect
                                 the difference between the tag and untag APIs.

@@ -741,23 +760,9 @@ class IAMBackend(BaseBackend):
         return tags, marker

     def tag_role(self, role_name, tags):
-        if len(tags) > 50:
-            raise TooManyTags(tags)
-
+        clean_tags = self._tag_verification(tags)
         role = self.get_role(role_name)
-
-        tag_keys = {}
-        for tag in tags:
-            # Need to index by the lowercase tag key since the keys are case insensitive, but their case is retained.
-            ref_key = tag['Key'].lower()
-            self._check_tag_duplicate(tag_keys, ref_key)
-            self._validate_tag_key(tag['Key'])
-            if len(tag['Value']) > 256:
-                raise TagValueTooBig(tag['Value'])
-
-            tag_keys[ref_key] = tag
-
-        role.tags.update(tag_keys)
+        role.tags.update(clean_tags)

     def untag_role(self, role_name, tag_keys):
         if len(tag_keys) > 50:
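The shared tag-verification logic factored out above boils down to case-insensitive de-duplication plus length limits. A standalone sketch of that idea, with the IAM-specific exception types replaced by `ValueError` (an assumption for illustration):

```python
# Validate a list of {'Key': ..., 'Value': ...} tags: at most max_tags entries,
# values capped in length, and keys unique case-insensitively while their
# original casing is retained in the stored tag.
def tag_verification(tags, max_tags=50, max_value_len=256):
    if len(tags) > max_tags:
        raise ValueError("too many tags")
    tag_keys = {}
    for tag in tags:
        ref_key = tag['Key'].lower()  # index by lowercase key, keep case in value
        if ref_key in tag_keys:
            raise ValueError("duplicate tag key: " + tag['Key'])
        if len(tag['Value']) > max_value_len:
            raise ValueError("tag value too long")
        tag_keys[ref_key] = tag
    return tag_keys

print(tag_verification([{'Key': 'Env', 'Value': 'prod'}]))
```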
@@ -178,9 +178,11 @@ class IamResponse(BaseResponse):
             'AssumeRolePolicyDocument')
         permissions_boundary = self._get_param(
             'PermissionsBoundary')
+        description = self._get_param('Description')
+        tags = self._get_multi_param('Tags.member')

         role = iam_backend.create_role(
-            role_name, assume_role_policy_document, path, permissions_boundary)
+            role_name, assume_role_policy_document, path, permissions_boundary, description, tags)
         template = self.response_template(CREATE_ROLE_TEMPLATE)
         return template.render(role=role)

@@ -1002,6 +1004,7 @@ CREATE_ROLE_TEMPLATE = """<CreateRoleResponse xmlns="https://iam.amazonaws.com/d
     <Arn>{{ role.arn }}</Arn>
     <RoleName>{{ role.name }}</RoleName>
     <AssumeRolePolicyDocument>{{ role.assume_role_policy_document }}</AssumeRolePolicyDocument>
+    <Description>{{role.description}}</Description>
     <CreateDate>{{ role.created_iso_8601 }}</CreateDate>
     <RoleId>{{ role.id }}</RoleId>
     {% if role.permissions_boundary %}

@@ -1010,6 +1013,16 @@ CREATE_ROLE_TEMPLATE = """<CreateRoleResponse xmlns="https://iam.amazonaws.com/d
       <PermissionsBoundaryArn>{{ role.permissions_boundary }}</PermissionsBoundaryArn>
     </PermissionsBoundary>
     {% endif %}
+    {% if role.tags %}
+    <Tags>
+      {% for tag in role.get_tags() %}
+      <member>
+        <Key>{{ tag['Key'] }}</Key>
+        <Value>{{ tag['Value'] }}</Value>
+      </member>
+      {% endfor %}
+    </Tags>
+    {% endif %}
   </Role>
 </CreateRoleResult>
 <ResponseMetadata>

@@ -1043,6 +1056,7 @@ UPDATE_ROLE_DESCRIPTION_TEMPLATE = """<UpdateRoleDescriptionResponse xmlns="http
     <Arn>{{ role.arn }}</Arn>
     <RoleName>{{ role.name }}</RoleName>
     <AssumeRolePolicyDocument>{{ role.assume_role_policy_document }}</AssumeRolePolicyDocument>
+    <Description>{{role.description}}</Description>
     <CreateDate>{{ role.created_iso_8601 }}</CreateDate>
     <RoleId>{{ role.id }}</RoleId>
     {% if role.tags %}

@@ -1069,6 +1083,7 @@ GET_ROLE_TEMPLATE = """<GetRoleResponse xmlns="https://iam.amazonaws.com/doc/201
     <Arn>{{ role.arn }}</Arn>
     <RoleName>{{ role.name }}</RoleName>
     <AssumeRolePolicyDocument>{{ role.assume_role_policy_document }}</AssumeRolePolicyDocument>
+    <Description>{{role.description}}</Description>
     <CreateDate>{{ role.created_iso_8601 }}</CreateDate>
     <RoleId>{{ role.id }}</RoleId>
     {% if role.tags %}

@@ -1759,8 +1774,15 @@ GET_ACCOUNT_AUTHORIZATION_DETAILS_TEMPLATE = """<GetAccountAuthorizationDetailsR
         <Arn>{{ role.arn }}</Arn>
         <RoleName>{{ role.name }}</RoleName>
         <AssumeRolePolicyDocument>{{ role.assume_role_policy_document }}</AssumeRolePolicyDocument>
+        <Description>{{role.description}}</Description>
         <CreateDate>{{ role.created_iso_8601 }}</CreateDate>
         <RoleId>{{ role.id }}</RoleId>
+        {% if role.permissions_boundary %}
+        <PermissionsBoundary>
+          <PermissionsBoundaryType>PermissionsBoundaryPolicy</PermissionsBoundaryType>
+          <PermissionsBoundaryArn>{{ role.permissions_boundary }}</PermissionsBoundaryArn>
+        </PermissionsBoundary>
+        {% endif %}
       </member>
       {% endfor %}
     </Roles>
@@ -238,7 +238,7 @@ class KmsResponse(BaseResponse):

         value = self.parameters.get("CiphertextBlob")
         try:
-            return json.dumps({"Plaintext": base64.b64decode(value).decode("utf-8")})
+            return json.dumps({"Plaintext": base64.b64decode(value).decode("utf-8"), 'KeyId': 'key_id'})
         except UnicodeDecodeError:
            # Generate data key will produce random bytes which when decrypted is still returned as base64
            return json.dumps({"Plaintext": value})
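The mocked Decrypt response now carries a `KeyId` field alongside the plaintext. A standalone sketch of the resulting response shape (the fixed `'key_id'` value mirrors the mock's placeholder; input here is an illustrative string, not a real KMS blob):

```python
import base64
import json

# Decode the base64 "ciphertext" and echo it back with a placeholder KeyId,
# falling back to the raw value when the decoded bytes are not valid UTF-8
# (as with generate-data-key output).
def decrypt_response(ciphertext_blob):
    try:
        return json.dumps({"Plaintext": base64.b64decode(ciphertext_blob).decode("utf-8"),
                           "KeyId": "key_id"})
    except UnicodeDecodeError:
        return json.dumps({"Plaintext": ciphertext_blob})

print(decrypt_response(base64.b64encode(b"secret").decode()))
```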
@@ -98,17 +98,29 @@ class LogStream:

             return True

+        def get_paging_token_from_index(index, back=False):
+            if index is not None:
+                return "b/{:056d}".format(index) if back else "f/{:056d}".format(index)
+            return 0
+
+        def get_index_from_paging_token(token):
+            if token is not None:
+                return int(token[2:])
+            return 0
+
         events = sorted(filter(filter_func, self.events), key=lambda event: event.timestamp, reverse=start_from_head)
-        back_token = next_token
-        if next_token is None:
-            next_token = 0
+        next_index = get_index_from_paging_token(next_token)
+        back_index = next_index

-        events_page = [event.to_response_dict() for event in events[next_token: next_token + limit]]
-        next_token += limit
-        if next_token >= len(self.events):
-            next_token = None
+        events_page = [event.to_response_dict() for event in events[next_index: next_index + limit]]
+        if next_index + limit < len(self.events):
+            next_index += limit

-        return events_page, back_token, next_token
+        back_index -= limit
+        if back_index <= 0:
+            back_index = 0
+
+        return events_page, get_paging_token_from_index(back_index, True), get_paging_token_from_index(next_index)

    def filter_log_events(self, log_group_name, log_stream_names, start_time, end_time, limit, next_token, filter_pattern, interleaved):
        def filter_func(event):
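The new opaque paging tokens encode a direction prefix (`f/` for forward, `b/` for backward) followed by the event index zero-padded to 56 digits. The helpers above can be exercised standalone:

```python
# Round-trip the log-events paging tokens: "f/<index>" for the next page,
# "b/<index>" for the previous page, index zero-padded to 56 digits.
def get_paging_token_from_index(index, back=False):
    if index is not None:
        return "b/{:056d}".format(index) if back else "f/{:056d}".format(index)
    return 0

def get_index_from_paging_token(token):
    if token is not None:
        return int(token[2:])
    return 0

tok = get_paging_token_from_index(30)
print(tok[:2], get_index_from_paging_token(tok))  # f/ 30
```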
@@ -2,6 +2,7 @@ from __future__ import unicode_literals

 import datetime
 import re
+import json

 from moto.core import BaseBackend, BaseModel
 from moto.core.exceptions import RESTError

@@ -151,7 +152,6 @@ class FakeRoot(FakeOrganizationalUnit):
 class FakeServiceControlPolicy(BaseModel):

     def __init__(self, organization, **kwargs):
-        self.type = 'POLICY'
         self.content = kwargs.get('Content')
         self.description = kwargs.get('Description')
         self.name = kwargs.get('Name')

@@ -197,7 +197,38 @@ class OrganizationsBackend(BaseBackend):

     def create_organization(self, **kwargs):
         self.org = FakeOrganization(kwargs['FeatureSet'])
-        self.ou.append(FakeRoot(self.org))
+        root_ou = FakeRoot(self.org)
+        self.ou.append(root_ou)
+        master_account = FakeAccount(
+            self.org,
+            AccountName='master',
+            Email=self.org.master_account_email,
+        )
+        master_account.id = self.org.master_account_id
+        self.accounts.append(master_account)
+        default_policy = FakeServiceControlPolicy(
+            self.org,
+            Name='FullAWSAccess',
+            Description='Allows access to every operation',
+            Type='SERVICE_CONTROL_POLICY',
+            Content=json.dumps(
+                {
+                    "Version": "2012-10-17",
+                    "Statement": [
+                        {
+                            "Effect": "Allow",
+                            "Action": "*",
+                            "Resource": "*"
+                        }
+                    ]
+                }
+            )
+        )
+        default_policy.id = utils.DEFAULT_POLICY_ID
+        default_policy.aws_managed = True
+        self.policies.append(default_policy)
+        self.attach_policy(PolicyId=default_policy.id, TargetId=root_ou.id)
+        self.attach_policy(PolicyId=default_policy.id, TargetId=master_account.id)
         return self.org.describe()

     def describe_organization(self):

@@ -216,6 +247,7 @@ class OrganizationsBackend(BaseBackend):
     def create_organizational_unit(self, **kwargs):
         new_ou = FakeOrganizationalUnit(self.org, **kwargs)
         self.ou.append(new_ou)
+        self.attach_policy(PolicyId=utils.DEFAULT_POLICY_ID, TargetId=new_ou.id)
         return new_ou.describe()

     def get_organizational_unit_by_id(self, ou_id):

@@ -258,6 +290,7 @@ class OrganizationsBackend(BaseBackend):
     def create_account(self, **kwargs):
         new_account = FakeAccount(self.org, **kwargs)
         self.accounts.append(new_account)
+        self.attach_policy(PolicyId=utils.DEFAULT_POLICY_ID, TargetId=new_account.id)
         return new_account.create_account_status

     def get_account_by_id(self, account_id):

@@ -358,8 +391,7 @@ class OrganizationsBackend(BaseBackend):

     def attach_policy(self, **kwargs):
         policy = next((p for p in self.policies if p.id == kwargs['PolicyId']), None)
-        if (re.compile(utils.ROOT_ID_REGEX).match(kwargs['TargetId']) or
-                re.compile(utils.OU_ID_REGEX).match(kwargs['TargetId'])):
+        if (re.compile(utils.ROOT_ID_REGEX).match(kwargs['TargetId']) or re.compile(utils.OU_ID_REGEX).match(kwargs['TargetId'])):
             ou = next((ou for ou in self.ou if ou.id == kwargs['TargetId']), None)
             if ou is not None:
                 if ou not in ou.attached_policies:
@@ -4,7 +4,8 @@ import random
 import string

 MASTER_ACCOUNT_ID = '123456789012'
-MASTER_ACCOUNT_EMAIL = 'fakeorg@moto-example.com'
+MASTER_ACCOUNT_EMAIL = 'master@example.com'
+DEFAULT_POLICY_ID = 'p-FullAWSAccess'
 ORGANIZATION_ARN_FORMAT = 'arn:aws:organizations::{0}:organization/{1}'
 MASTER_ACCOUNT_ARN_FORMAT = 'arn:aws:organizations::{0}:account/{1}/{0}'
 ACCOUNT_ARN_FORMAT = 'arn:aws:organizations::{0}:account/{1}/{2}'
@@ -26,7 +27,7 @@ ROOT_ID_REGEX = r'r-[a-z0-9]{%s}' % ROOT_ID_SIZE
 OU_ID_REGEX = r'ou-[a-z0-9]{%s}-[a-z0-9]{%s}' % (ROOT_ID_SIZE, OU_ID_SUFFIX_SIZE)
 ACCOUNT_ID_REGEX = r'[0-9]{%s}' % ACCOUNT_ID_SIZE
 CREATE_ACCOUNT_STATUS_ID_REGEX = r'car-[a-z0-9]{%s}' % CREATE_ACCOUNT_STATUS_ID_SIZE
-SCP_ID_REGEX = r'p-[a-z0-9]{%s}' % SCP_ID_SIZE
+SCP_ID_REGEX = r'%s|p-[a-z0-9]{%s}' % (DEFAULT_POLICY_ID, SCP_ID_SIZE)


 def make_random_org_id():
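The new `SCP_ID_REGEX` is a plain alternation, so the well-known `p-FullAWSAccess` ID validates alongside generated lowercase IDs. A quick stdlib check, with the constants reproduced here for illustration:

```python
import re

DEFAULT_POLICY_ID = 'p-FullAWSAccess'
SCP_ID_SIZE = 8
# Top-level alternation: either the default policy ID or a generated SCP ID
SCP_ID_REGEX = r'%s|p-[a-z0-9]{%s}' % (DEFAULT_POLICY_ID, SCP_ID_SIZE)

print(bool(re.fullmatch(SCP_ID_REGEX, 'p-FullAWSAccess')))  # True
print(bool(re.fullmatch(SCP_ID_REGEX, 'p-k2av4a8a')))       # True
print(bool(re.fullmatch(SCP_ID_REGEX, 'p-TOOUPPER')))       # False (uppercase)
```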
@@ -78,7 +78,7 @@ class Cluster(TaggableResourceMixin, BaseModel):
         super(Cluster, self).__init__(region_name, tags)
         self.redshift_backend = redshift_backend
         self.cluster_identifier = cluster_identifier
-        self.create_time = iso_8601_datetime_with_milliseconds(datetime.datetime.now())
+        self.create_time = iso_8601_datetime_with_milliseconds(datetime.datetime.utcnow())
         self.status = 'available'
         self.node_type = node_type
         self.master_username = master_username
@@ -60,6 +60,17 @@ class MissingKey(S3ClientError):
         )


+class ObjectNotInActiveTierError(S3ClientError):
+    code = 403
+
+    def __init__(self, key_name):
+        super(ObjectNotInActiveTierError, self).__init__(
+            "ObjectNotInActiveTierError",
+            "The source object of the COPY operation is not in the active tier and is only stored in Amazon Glacier.",
+            Key=key_name,
+        )
+
+
 class InvalidPartOrder(S3ClientError):
     code = 400

@@ -28,7 +28,8 @@ MAX_BUCKET_NAME_LENGTH = 63
 MIN_BUCKET_NAME_LENGTH = 3
 UPLOAD_ID_BYTES = 43
 UPLOAD_PART_MIN_SIZE = 5242880
-STORAGE_CLASS = ["STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA"]
+STORAGE_CLASS = ["STANDARD", "REDUCED_REDUNDANCY", "STANDARD_IA", "ONEZONE_IA",
+                 "INTELLIGENT_TIERING", "GLACIER", "DEEP_ARCHIVE"]
 DEFAULT_KEY_BUFFER_SIZE = 16 * 1024 * 1024
 DEFAULT_TEXT_ENCODING = sys.getdefaultencoding()

@@ -17,7 +17,7 @@ from moto.s3bucket_path.utils import bucket_name_from_url as bucketpath_bucket_n
     parse_key_name as bucketpath_parse_key_name, is_delete_keys as bucketpath_is_delete_keys

 from .exceptions import BucketAlreadyExists, S3ClientError, MissingBucket, MissingKey, InvalidPartOrder, MalformedXML, \
-    MalformedACLError, InvalidNotificationARN, InvalidNotificationEvent
+    MalformedACLError, InvalidNotificationARN, InvalidNotificationEvent, ObjectNotInActiveTierError
 from .models import s3_backend, get_canned_acl, FakeGrantee, FakeGrant, FakeAcl, FakeKey, FakeTagging, FakeTagSet, \
     FakeTag
 from .utils import bucket_name_from_url, clean_key_name, metadata_from_headers, parse_region_from_url
@@ -463,10 +463,13 @@ class ResponseObject(_TemplateEnvironmentMixin, ActionAuthenticatorMixin):
         else:
             result_folders, is_truncated, next_continuation_token = self._truncate_result(result_folders, max_keys)

+        key_count = len(result_keys) + len(result_folders)
+
         return template.render(
             bucket=bucket,
             prefix=prefix or '',
             delimiter=delimiter,
+            key_count=key_count,
             result_keys=result_keys,
             result_folders=result_folders,
             fetch_owner=fetch_owner,
@@ -902,7 +905,11 @@ class ResponseObject(_TemplateEnvironmentMixin, ActionAuthenticatorMixin):
             src_version_id = parse_qs(src_key_parsed.query).get(
                 'versionId', [None])[0]

-            if self.backend.get_key(src_bucket, src_key, version_id=src_version_id):
+            key = self.backend.get_key(src_bucket, src_key, version_id=src_version_id)
+
+            if key is not None:
+                if key.storage_class in ["GLACIER", "DEEP_ARCHIVE"]:
+                    raise ObjectNotInActiveTierError(key)
                 self.backend.copy_key(src_bucket, src_key, bucket_name, key_name,
                                       storage=storage_class, acl=acl, src_version_id=src_version_id)
             else:
@@ -1326,7 +1333,7 @@ S3_BUCKET_GET_RESPONSE_V2 = """<?xml version="1.0" encoding="UTF-8"?>
   <Name>{{ bucket.name }}</Name>
   <Prefix>{{ prefix }}</Prefix>
   <MaxKeys>{{ max_keys }}</MaxKeys>
-  <KeyCount>{{ result_keys | length }}</KeyCount>
+  <KeyCount>{{ key_count }}</KeyCount>
   {% if delimiter %}
     <Delimiter>{{ delimiter }}</Delimiter>
   {% endif %}
@@ -119,7 +119,7 @@ class Subscription(BaseModel):
             else:
                 assert False

-            lambda_backends[region].send_message(function_name, message, subject=subject, qualifier=qualifier)
+            lambda_backends[region].send_sns_message(function_name, message, subject=subject, qualifier=qualifier)

     def _matches_filter_policy(self, message_attributes):
         # TODO: support Anything-but matching, prefix matching and
@@ -189,6 +189,8 @@ class Queue(BaseModel):
                                                      self.name)
         self.dead_letter_queue = None

+        self.lambda_event_source_mappings = {}
+
         # default settings for a non fifo queue
         defaults = {
             'ContentBasedDeduplication': 'false',
@@ -360,6 +362,33 @@ class Queue(BaseModel):

     def add_message(self, message):
         self._messages.append(message)
+        from moto.awslambda import lambda_backends
+        for arn, esm in self.lambda_event_source_mappings.items():
+            backend = sqs_backends[self.region]
+
+            """
+            Lambda polls the queue and invokes your function synchronously with an event
+            that contains queue messages. Lambda reads messages in batches and invokes
+            your function once for each batch. When your function successfully processes
+            a batch, Lambda deletes its messages from the queue.
+            """
+            messages = backend.receive_messages(
+                self.name,
+                esm.batch_size,
+                self.receive_message_wait_time_seconds,
+                self.visibility_timeout,
+            )
+
+            result = lambda_backends[self.region].send_sqs_batch(
+                arn,
+                messages,
+                self.queue_arn,
+            )
+
+            if result:
+                [backend.delete_message(self.name, m.receipt_handle) for m in messages]
+            else:
+                [backend.change_message_visibility(self.name, m.receipt_handle, 0) for m in messages]

     def get_cfn_attribute(self, attribute_name):
         from moto.cloudformation.exceptions import UnformattedGetAttTemplateException
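The delivery loop added to `add_message` follows the standard SQS-to-Lambda contract quoted in its docstring: receive a batch, invoke the function, then delete the messages on success or make them immediately visible again on failure. A minimal self-contained sketch of that contract, using toy stand-in classes rather than moto's backends:

```python
class TinyQueue:
    """Toy queue: visible messages plus an in-flight set (hypothetical stand-in)."""

    def __init__(self, messages):
        self.visible = list(messages)
        self.in_flight = []

    def receive(self, batch_size):
        # Move up to batch_size messages from visible to in-flight
        batch, self.visible = self.visible[:batch_size], self.visible[batch_size:]
        self.in_flight.extend(batch)
        return batch

    def delete(self, batch):
        for m in batch:
            self.in_flight.remove(m)

    def make_visible(self, batch):
        # Equivalent of resetting the visibility timeout to 0
        for m in batch:
            self.in_flight.remove(m)
            self.visible.append(m)


def deliver(queue, handler, batch_size=10):
    batch = queue.receive(batch_size)
    try:
        handler(batch)
        queue.delete(batch)        # success: messages leave the queue
        return True
    except Exception:
        queue.make_visible(batch)  # failure: messages become receivable again
        return False


q = TinyQueue(['m1', 'm2', 'm3'])
assert deliver(q, lambda batch: None) is True and q.visible == []

q2 = TinyQueue(['m1'])
assert deliver(q2, lambda batch: 1 / 0) is False and q2.visible == ['m1']
```

The same failure path is what the `test_invoke_function_from_sqs_exception` test further down asserts: after the handler raises, the messages are visible and unprocessed again.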
@@ -2,7 +2,8 @@ from __future__ import unicode_literals
 import datetime
 from moto.core import BaseBackend, BaseModel
 from moto.core.utils import iso_8601_datetime_with_milliseconds
-from moto.sts.utils import random_access_key_id, random_secret_access_key, random_session_token
+from moto.iam.models import ACCOUNT_ID
+from moto.sts.utils import random_access_key_id, random_secret_access_key, random_session_token, random_assumed_role_id


 class Token(BaseModel):
@@ -22,7 +23,7 @@ class AssumedRole(BaseModel):

     def __init__(self, role_session_name, role_arn, policy, duration, external_id):
         self.session_name = role_session_name
-        self.arn = role_arn
+        self.role_arn = role_arn
         self.policy = policy
         now = datetime.datetime.utcnow()
         self.expiration = now + datetime.timedelta(seconds=duration)
@@ -30,11 +31,24 @@ class AssumedRole(BaseModel):
         self.access_key_id = "ASIA" + random_access_key_id()
         self.secret_access_key = random_secret_access_key()
         self.session_token = random_session_token()
+        self.assumed_role_id = "AROA" + random_assumed_role_id()

     @property
     def expiration_ISO8601(self):
         return iso_8601_datetime_with_milliseconds(self.expiration)

+    @property
+    def user_id(self):
+        return self.assumed_role_id + ":" + self.session_name
+
+    @property
+    def arn(self):
+        return "arn:aws:sts::{account_id}:assumed-role/{role_name}/{session_name}".format(
+            account_id=ACCOUNT_ID,
+            role_name=self.role_arn.split("/")[-1],
+            session_name=self.session_name
+        )
+

 class STSBackend(BaseBackend):

@@ -54,6 +68,12 @@ class STSBackend(BaseBackend):
         self.assumed_roles.append(role)
         return role

+    def get_assumed_role_from_access_key(self, access_key_id):
+        for assumed_role in self.assumed_roles:
+            if assumed_role.access_key_id == access_key_id:
+                return assumed_role
+        return None
+
     def assume_role_with_web_identity(self, **kwargs):
         return self.assume_role(**kwargs)

@@ -1,6 +1,8 @@
 from __future__ import unicode_literals

 from moto.core.responses import BaseResponse
+from moto.iam.models import ACCOUNT_ID
+from moto.iam import iam_backend
 from .exceptions import STSValidationError
 from .models import sts_backend

@@ -31,7 +33,7 @@ class TokenResponse(BaseResponse):
         token = sts_backend.get_federation_token(
             duration=duration, name=name, policy=policy)
         template = self.response_template(GET_FEDERATION_TOKEN_RESPONSE)
-        return template.render(token=token)
+        return template.render(token=token, account_id=ACCOUNT_ID)

     def assume_role(self):
         role_session_name = self.querystring.get('RoleSessionName')[0]
@@ -71,7 +73,23 @@ class TokenResponse(BaseResponse):

     def get_caller_identity(self):
         template = self.response_template(GET_CALLER_IDENTITY_RESPONSE)
-        return template.render()
+
+        # Default values in case the request does not use valid credentials generated by moto
+        user_id = "AKIAIOSFODNN7EXAMPLE"
+        arn = "arn:aws:sts::{account_id}:user/moto".format(account_id=ACCOUNT_ID)
+
+        access_key_id = self.get_current_user()
+        assumed_role = sts_backend.get_assumed_role_from_access_key(access_key_id)
+        if assumed_role:
+            user_id = assumed_role.user_id
+            arn = assumed_role.arn
+
+        user = iam_backend.get_user_from_access_key_id(access_key_id)
+        if user:
+            user_id = user.id
+            arn = user.arn
+
+        return template.render(account_id=ACCOUNT_ID, user_id=user_id, arn=arn)


 GET_SESSION_TOKEN_RESPONSE = """<GetSessionTokenResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
@@ -99,8 +117,8 @@ GET_FEDERATION_TOKEN_RESPONSE = """<GetFederationTokenResponse xmlns="https://st
     <AccessKeyId>AKIAIOSFODNN7EXAMPLE</AccessKeyId>
   </Credentials>
   <FederatedUser>
-    <Arn>arn:aws:sts::123456789012:federated-user/{{ token.name }}</Arn>
-    <FederatedUserId>123456789012:{{ token.name }}</FederatedUserId>
+    <Arn>arn:aws:sts::{{ account_id }}:federated-user/{{ token.name }}</Arn>
+    <FederatedUserId>{{ account_id }}:{{ token.name }}</FederatedUserId>
   </FederatedUser>
   <PackedPolicySize>6</PackedPolicySize>
 </GetFederationTokenResult>
@@ -121,7 +139,7 @@ ASSUME_ROLE_RESPONSE = """<AssumeRoleResponse xmlns="https://sts.amazonaws.com/d
   </Credentials>
   <AssumedRoleUser>
     <Arn>{{ role.arn }}</Arn>
-    <AssumedRoleId>ARO123EXAMPLE123:{{ role.session_name }}</AssumedRoleId>
+    <AssumedRoleId>{{ role.user_id }}</AssumedRoleId>
   </AssumedRoleUser>
   <PackedPolicySize>6</PackedPolicySize>
 </AssumeRoleResult>
@@ -153,9 +171,9 @@ ASSUME_ROLE_WITH_WEB_IDENTITY_RESPONSE = """<AssumeRoleWithWebIdentityResponse x

 GET_CALLER_IDENTITY_RESPONSE = """<GetCallerIdentityResponse xmlns="https://sts.amazonaws.com/doc/2011-06-15/">
   <GetCallerIdentityResult>
-    <Arn>arn:aws:sts::123456789012:user/moto</Arn>
-    <UserId>AKIAIOSFODNN7EXAMPLE</UserId>
-    <Account>123456789012</Account>
+    <Arn>{{ arn }}</Arn>
+    <UserId>{{ user_id }}</UserId>
+    <Account>{{ account_id }}</Account>
   </GetCallerIdentityResult>
   <ResponseMetadata>
     <RequestId>c6104cbe-af31-11e0-8154-cbc7ccf896c7</RequestId>
@@ -6,15 +6,12 @@ import string
 import six

 ACCOUNT_SPECIFIC_ACCESS_KEY_PREFIX = "8NWMTLYQ"
+ACCOUNT_SPECIFIC_ASSUMED_ROLE_ID_PREFIX = "3X42LBCD"
 SESSION_TOKEN_PREFIX = "FQoGZXIvYXdzEBYaD"


 def random_access_key_id():
-    return ACCOUNT_SPECIFIC_ACCESS_KEY_PREFIX + ''.join(six.text_type(
-        random.choice(
-            string.ascii_uppercase + string.digits
-        )) for _ in range(8)
-    )
+    return ACCOUNT_SPECIFIC_ACCESS_KEY_PREFIX + _random_uppercase_or_digit_sequence(8)


 def random_secret_access_key():
@@ -23,3 +20,16 @@ def random_secret_access_key():

 def random_session_token():
     return SESSION_TOKEN_PREFIX + base64.b64encode(os.urandom(266))[len(SESSION_TOKEN_PREFIX):].decode()
+
+
+def random_assumed_role_id():
+    return ACCOUNT_SPECIFIC_ASSUMED_ROLE_ID_PREFIX + _random_uppercase_or_digit_sequence(9)
+
+
+def _random_uppercase_or_digit_sequence(length):
+    return ''.join(
+        six.text_type(
+            random.choice(
+                string.ascii_uppercase + string.digits
+            )) for _ in range(length)
+    )
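The refactored helpers above can be exercised standalone; on Python 3 the `six.text_type` wrapper is unnecessary, so a simplified sketch (prefix value reproduced from the diff for illustration) is:

```python
import random
import string

# Value reproduced from the diff above; treat it as an opaque prefix
ASSUMED_ROLE_ID_PREFIX = "3X42LBCD"


def _random_uppercase_or_digit_sequence(length):
    # random.choice over A-Z plus 0-9, repeated `length` times
    return ''.join(random.choice(string.ascii_uppercase + string.digits)
                   for _ in range(length))


def random_assumed_role_id():
    return ASSUMED_ROLE_ID_PREFIX + _random_uppercase_or_digit_sequence(9)


role_id = random_assumed_role_id()
print(len(role_id))  # 17 (8-char prefix + 9 random characters)
```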
@@ -61,7 +61,8 @@ def print_implementation_coverage(coverage):
             percentage_implemented = 0

         print("")
-        print("## {} - {}% implemented".format(service_name, percentage_implemented))
+        print("## {}\n".format(service_name))
+        print("{}% implemented\n".format(percentage_implemented))
         for op in operations:
             if op in implemented:
                 print("- [X] {}".format(op))
@@ -93,7 +94,8 @@ def write_implementation_coverage_to_file(coverage):
             percentage_implemented = 0

         file.write("\n")
-        file.write("## {} - {}% implemented\n".format(service_name, percentage_implemented))
+        file.write("## {}\n".format(service_name))
+        file.write("{}% implemented\n".format(percentage_implemented))
         for op in operations:
             if op in implemented:
                 file.write("- [X] {}\n".format(op))
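The reworked output puts the service heading and the percentage on separate lines, followed by a checklist of operations. A small sketch of the resulting Markdown (using a hypothetical `coverage_lines` helper, not the script's actual function):

```python
def coverage_lines(service_name, implemented, operations):
    """Build the Markdown lines for one service's coverage summary."""
    percentage = int(100.0 * len(implemented) / len(operations)) if operations else 0
    lines = ["## {}".format(service_name), "{}% implemented".format(percentage)]
    for op in operations:
        marker = "X" if op in implemented else " "
        lines.append("- [{}] {}".format(marker, op))
    return lines


out = coverage_lines("sts", {"AssumeRole"}, ["AssumeRole", "GetCallerIdentity"])
print(out[1])  # 50% implemented
```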
setup.py
@@ -30,8 +30,8 @@ def get_version():
 install_requires = [
     "Jinja2>=2.10.1",
     "boto>=2.36.0",
-    "boto3>=1.9.86",
-    "botocore>=1.12.86",
+    "boto3>=1.9.201",
+    "botocore>=1.12.201",
     "cryptography>=2.3.0",
     "requests>=2.5",
     "xmltodict",
@@ -74,6 +74,31 @@ def test_list_certificates():
     resp['CertificateSummaryList'][0]['DomainName'].should.equal(SERVER_COMMON_NAME)


+@mock_acm
+def test_list_certificates_by_status():
+    client = boto3.client('acm', region_name='eu-central-1')
+    issued_arn = _import_cert(client)
+    pending_arn = client.request_certificate(DomainName='google.com')['CertificateArn']
+
+    resp = client.list_certificates()
+    len(resp['CertificateSummaryList']).should.equal(2)
+    resp = client.list_certificates(CertificateStatuses=['EXPIRED', 'INACTIVE'])
+    len(resp['CertificateSummaryList']).should.equal(0)
+    resp = client.list_certificates(CertificateStatuses=['PENDING_VALIDATION'])
+    len(resp['CertificateSummaryList']).should.equal(1)
+    resp['CertificateSummaryList'][0]['CertificateArn'].should.equal(pending_arn)
+
+    resp = client.list_certificates(CertificateStatuses=['ISSUED'])
+    len(resp['CertificateSummaryList']).should.equal(1)
+    resp['CertificateSummaryList'][0]['CertificateArn'].should.equal(issued_arn)
+    resp = client.list_certificates(CertificateStatuses=['ISSUED', 'PENDING_VALIDATION'])
+    len(resp['CertificateSummaryList']).should.equal(2)
+    arns = {cert['CertificateArn'] for cert in resp['CertificateSummaryList']}
+    arns.should.contain(issued_arn)
+    arns.should.contain(pending_arn)
+
+
 @mock_acm
 def test_get_invalid_certificate():
     client = boto3.client('acm', region_name='eu-central-1')
@@ -291,6 +316,7 @@ def test_request_certificate():
     )
     resp.should.contain('CertificateArn')
     arn = resp['CertificateArn']
+    arn.should.match(r"arn:aws:acm:eu-central-1:\d{12}:certificate/")

     resp = client.request_certificate(
         DomainName='google.com',
@@ -988,13 +988,30 @@ def test_api_keys():
     apikey['name'].should.equal(apikey_name)
     len(apikey['value']).should.equal(40)

+    apikey_name = 'TESTKEY3'
+    payload = {'name': apikey_name}
+    response = client.create_api_key(**payload)
+    apikey_id = response['id']
+
+    patch_operations = [
+        {'op': 'replace', 'path': '/name', 'value': 'TESTKEY3_CHANGE'},
+        {'op': 'replace', 'path': '/customerId', 'value': '12345'},
+        {'op': 'replace', 'path': '/description', 'value': 'APIKEY UPDATE TEST'},
+        {'op': 'replace', 'path': '/enabled', 'value': 'false'},
+    ]
+    response = client.update_api_key(apiKey=apikey_id, patchOperations=patch_operations)
+    response['name'].should.equal('TESTKEY3_CHANGE')
+    response['customerId'].should.equal('12345')
+    response['description'].should.equal('APIKEY UPDATE TEST')
+    response['enabled'].should.equal(False)
+
     response = client.get_api_keys()
-    len(response['items']).should.equal(2)
+    len(response['items']).should.equal(3)

     client.delete_api_key(apiKey=apikey_id)

     response = client.get_api_keys()
-    len(response['items']).should.equal(1)
+    len(response['items']).should.equal(2)

 @mock_apigateway
 def test_usage_plans():
@@ -1,6 +1,7 @@
 from __future__ import unicode_literals

 import base64
+import uuid
 import botocore.client
 import boto3
 import hashlib
@@ -11,11 +12,12 @@ import zipfile
 import sure  # noqa

 from freezegun import freeze_time
-from moto import mock_lambda, mock_s3, mock_ec2, mock_sns, mock_logs, settings
+from moto import mock_lambda, mock_s3, mock_ec2, mock_sns, mock_logs, settings, mock_sqs
 from nose.tools import assert_raises
 from botocore.exceptions import ClientError

 _lambda_region = 'us-west-2'
 boto3.setup_default_session(region_name=_lambda_region)


 def _process_lambda(func_str):
@@ -59,6 +61,13 @@ def lambda_handler(event, context):
 """
     return _process_lambda(pfunc)

+
+def get_test_zip_file4():
+    pfunc = """
+def lambda_handler(event, context):
+    raise Exception('I failed!')
+"""
+    return _process_lambda(pfunc)
+

 @mock_lambda
 def test_list_functions():
@@ -933,3 +942,306 @@ def test_list_versions_by_function_for_nonexistent_function():
     versions = conn.list_versions_by_function(FunctionName='testFunction')

     assert len(versions['Versions']) == 0
+
+
+@mock_logs
+@mock_lambda
+@mock_sqs
+def test_create_event_source_mapping():
+    sqs = boto3.resource('sqs')
+    queue = sqs.create_queue(QueueName="test-sqs-queue1")
+
+    conn = boto3.client('lambda')
+    func = conn.create_function(
+        FunctionName='testFunction',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file3(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+
+    response = conn.create_event_source_mapping(
+        EventSourceArn=queue.attributes['QueueArn'],
+        FunctionName=func['FunctionArn'],
+    )
+
+    assert response['EventSourceArn'] == queue.attributes['QueueArn']
+    assert response['FunctionArn'] == func['FunctionArn']
+    assert response['State'] == 'Enabled'
+
+
+@mock_logs
+@mock_lambda
+@mock_sqs
+def test_invoke_function_from_sqs():
+    logs_conn = boto3.client("logs")
+    sqs = boto3.resource('sqs')
+    queue = sqs.create_queue(QueueName="test-sqs-queue1")
+
+    conn = boto3.client('lambda')
+    func = conn.create_function(
+        FunctionName='testFunction',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file3(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+
+    response = conn.create_event_source_mapping(
+        EventSourceArn=queue.attributes['QueueArn'],
+        FunctionName=func['FunctionArn'],
+    )
+
+    assert response['EventSourceArn'] == queue.attributes['QueueArn']
+    assert response['State'] == 'Enabled'
+
+    sqs_client = boto3.client('sqs')
+    sqs_client.send_message(QueueUrl=queue.url, MessageBody='test')
+    start = time.time()
+    while (time.time() - start) < 30:
+        result = logs_conn.describe_log_streams(logGroupName='/aws/lambda/testFunction')
+        log_streams = result.get('logStreams')
+        if not log_streams:
+            time.sleep(1)
+            continue
+
+        assert len(log_streams) == 1
+        result = logs_conn.get_log_events(logGroupName='/aws/lambda/testFunction', logStreamName=log_streams[0]['logStreamName'])
+        for event in result.get('events'):
+            if event['message'] == 'get_test_zip_file3 success':
+                return
+        time.sleep(1)
+
+    assert False, "Test Failed"
+
+
+@mock_logs
+@mock_lambda
+@mock_sqs
+def test_invoke_function_from_sqs_exception():
+    logs_conn = boto3.client("logs")
+    sqs = boto3.resource('sqs')
+    queue = sqs.create_queue(QueueName="test-sqs-queue1")
+
+    conn = boto3.client('lambda')
+    func = conn.create_function(
+        FunctionName='testFunction',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file4(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+
+    response = conn.create_event_source_mapping(
+        EventSourceArn=queue.attributes['QueueArn'],
+        FunctionName=func['FunctionArn'],
+    )
+
+    assert response['EventSourceArn'] == queue.attributes['QueueArn']
+    assert response['State'] == 'Enabled'
+
+    entries = []
+    for i in range(3):
+        body = {
+            "uuid": str(uuid.uuid4()),
+            "test": "test_{}".format(i),
+        }
+        entry = {
+            'Id': str(i),
+            'MessageBody': json.dumps(body)
+        }
+        entries.append(entry)
+
+    queue.send_messages(Entries=entries)
+
+    start = time.time()
+    while (time.time() - start) < 30:
+        result = logs_conn.describe_log_streams(logGroupName='/aws/lambda/testFunction')
+        log_streams = result.get('logStreams')
+        if not log_streams:
+            time.sleep(1)
+            continue
+        assert len(log_streams) >= 1
+
+        result = logs_conn.get_log_events(logGroupName='/aws/lambda/testFunction', logStreamName=log_streams[0]['logStreamName'])
+        for event in result.get('events'):
+            if 'I failed!' in event['message']:
+                messages = queue.receive_messages(MaxNumberOfMessages=10)
+                # Verify messages are still visible and unprocessed
+                assert len(messages) == 3
+                return
+        time.sleep(1)
+
+    assert False, "Test Failed"
+
+
+@mock_logs
+@mock_lambda
+@mock_sqs
+def test_list_event_source_mappings():
+    sqs = boto3.resource('sqs')
+    queue = sqs.create_queue(QueueName="test-sqs-queue1")
+
+    conn = boto3.client('lambda')
+    func = conn.create_function(
+        FunctionName='testFunction',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file3(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+    response = conn.create_event_source_mapping(
+        EventSourceArn=queue.attributes['QueueArn'],
+        FunctionName=func['FunctionArn'],
+    )
+    mappings = conn.list_event_source_mappings(EventSourceArn='123')
+    assert len(mappings['EventSourceMappings']) == 0
+
+    mappings = conn.list_event_source_mappings(EventSourceArn=queue.attributes['QueueArn'])
+    assert len(mappings['EventSourceMappings']) == 1
+    assert mappings['EventSourceMappings'][0]['UUID'] == response['UUID']
+    assert mappings['EventSourceMappings'][0]['FunctionArn'] == func['FunctionArn']
+
+
+@mock_lambda
+@mock_sqs
+def test_get_event_source_mapping():
+    sqs = boto3.resource('sqs')
+    queue = sqs.create_queue(QueueName="test-sqs-queue1")
+
+    conn = boto3.client('lambda')
+    func = conn.create_function(
+        FunctionName='testFunction',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file3(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+    response = conn.create_event_source_mapping(
+        EventSourceArn=queue.attributes['QueueArn'],
+        FunctionName=func['FunctionArn'],
+    )
+    mapping = conn.get_event_source_mapping(UUID=response['UUID'])
+    assert mapping['UUID'] == response['UUID']
+    assert mapping['FunctionArn'] == func['FunctionArn']
+
+    conn.get_event_source_mapping.when.called_with(UUID='1')\
+        .should.throw(botocore.client.ClientError)
+
+
+@mock_lambda
+@mock_sqs
+def test_update_event_source_mapping():
+    sqs = boto3.resource('sqs')
+    queue = sqs.create_queue(QueueName="test-sqs-queue1")
+
+    conn = boto3.client('lambda')
+    func1 = conn.create_function(
+        FunctionName='testFunction',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file3(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+    func2 = conn.create_function(
+        FunctionName='testFunction2',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file3(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+    response = conn.create_event_source_mapping(
+        EventSourceArn=queue.attributes['QueueArn'],
+        FunctionName=func1['FunctionArn'],
+    )
+    assert response['FunctionArn'] == func1['FunctionArn']
+    assert response['BatchSize'] == 10
+    assert response['State'] == 'Enabled'
+
+    mapping = conn.update_event_source_mapping(
+        UUID=response['UUID'],
+        Enabled=False,
+        BatchSize=15,
+        FunctionName='testFunction2'
+    )
+    assert mapping['UUID'] == response['UUID']
+    assert mapping['FunctionArn'] == func2['FunctionArn']
+    assert mapping['State'] == 'Disabled'
+
+
+@mock_lambda
+@mock_sqs
+def test_delete_event_source_mapping():
+    sqs = boto3.resource('sqs')
+    queue = sqs.create_queue(QueueName="test-sqs-queue1")
+
+    conn = boto3.client('lambda')
+    func1 = conn.create_function(
+        FunctionName='testFunction',
+        Runtime='python2.7',
+        Role='test-iam-role',
+        Handler='lambda_function.lambda_handler',
+        Code={
+            'ZipFile': get_test_zip_file3(),
+        },
+        Description='test lambda function',
+        Timeout=3,
+        MemorySize=128,
+        Publish=True,
+    )
+    response = conn.create_event_source_mapping(
+        EventSourceArn=queue.attributes['QueueArn'],
+        FunctionName=func1['FunctionArn'],
+    )
+    assert response['FunctionArn'] == func1['FunctionArn']
+    assert response['BatchSize'] == 10
+    assert response['State'] == 'Enabled'
+
+    response = conn.delete_event_source_mapping(UUID=response['UUID'])
+
+    assert response['State'] == 'Deleting'
+    conn.get_event_source_mapping.when.called_with(UUID=response['UUID'])\
+        .should.throw(botocore.client.ClientError)
@@ -642,6 +642,87 @@ def test_describe_task_definition():
     len(resp['jobDefinitions']).should.equal(3)


+@mock_logs
+@mock_ec2
+@mock_ecs
+@mock_iam
+@mock_batch
+def test_submit_job_by_name():
+    ec2_client, iam_client, ecs_client, logs_client, batch_client = _get_clients()
+    vpc_id, subnet_id, sg_id, iam_arn = _setup(ec2_client, iam_client)
+
+    compute_name = 'test_compute_env'
+    resp = batch_client.create_compute_environment(
+        computeEnvironmentName=compute_name,
+        type='UNMANAGED',
+        state='ENABLED',
+        serviceRole=iam_arn
+    )
+    arn = resp['computeEnvironmentArn']
+
+    resp = batch_client.create_job_queue(
+        jobQueueName='test_job_queue',
+        state='ENABLED',
+        priority=123,
+        computeEnvironmentOrder=[
+            {
+                'order': 123,
+                'computeEnvironment': arn
+            },
+        ]
+    )
+    queue_arn = resp['jobQueueArn']
+
+    job_definition_name = 'sleep10'
+
+    batch_client.register_job_definition(
+        jobDefinitionName=job_definition_name,
+        type='container',
+        containerProperties={
+            'image': 'busybox',
+            'vcpus': 1,
+            'memory': 128,
+            'command': ['sleep', '10']
+        }
+    )
+    batch_client.register_job_definition(
+        jobDefinitionName=job_definition_name,
+        type='container',
+        containerProperties={
+            'image': 'busybox',
+            'vcpus': 1,
+            'memory': 256,
+            'command': ['sleep', '10']
+        }
+    )
+    resp = batch_client.register_job_definition(
+        jobDefinitionName=job_definition_name,
+        type='container',
+        containerProperties={
+            'image': 'busybox',
+            'vcpus': 1,
+            'memory': 512,
+            'command': ['sleep', '10']
+        }
+    )
+    job_definition_arn = resp['jobDefinitionArn']
+
+    resp = batch_client.submit_job(
+        jobName='test1',
+        jobQueue=queue_arn,
+        jobDefinition=job_definition_name
+    )
+    job_id = resp['jobId']
+
+    resp_jobs = batch_client.describe_jobs(jobs=[job_id])
+
+    # batch_client.terminate_job(jobId=job_id)
+
+    len(resp_jobs['jobs']).should.equal(1)
+    resp_jobs['jobs'][0]['jobId'].should.equal(job_id)
+    resp_jobs['jobs'][0]['jobQueue'].should.equal(queue_arn)
+    resp_jobs['jobs'][0]['jobDefinition'].should.equal(job_definition_arn)
+
+
 # SLOW TESTS
 @expected_failure
 @mock_logs
@@ -593,9 +593,11 @@ def test_create_stack_lambda_and_dynamodb():
             }
         },
         "func1version": {
-            "Type": "AWS::Lambda::LambdaVersion",
-            "Properties" : {
-                "Version": "v1.2.3"
+            "Type": "AWS::Lambda::Version",
+            "Properties": {
+                "FunctionName": {
+                    "Ref": "func1"
+                }
             }
         },
         "tab1": {
@@ -618,8 +620,10 @@ def test_create_stack_lambda_and_dynamodb():
         },
         "func1mapping": {
             "Type": "AWS::Lambda::EventSourceMapping",
-            "Properties" : {
-                "FunctionName": "v1.2.3",
+            "Properties": {
+                "FunctionName": {
+                    "Ref": "func1"
+                },
                 "EventSourceArn": "arn:aws:dynamodb:region:XXXXXX:table/tab1/stream/2000T00:00:00.000",
                 "StartingPosition": "0",
                 "BatchSize": 100,
@@ -123,6 +123,526 @@ def test_put_configuration_recorder():
    assert "maximum number of configuration recorders: 1 is reached." in ce.exception.response['Error']['Message']


@mock_config
def test_put_configuration_aggregator():
    client = boto3.client('config', region_name='us-west-2')

    # With too many aggregation sources:
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[
                {
                    'AccountIds': ['012345678910', '111111111111', '222222222222'],
                    'AwsRegions': ['us-east-1', 'us-west-2']
                },
                {
                    'AccountIds': ['012345678910', '111111111111', '222222222222'],
                    'AwsRegions': ['us-east-1', 'us-west-2']
                }
            ]
        )
    assert 'Member must have length less than or equal to 1' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # With an invalid region config (no regions defined):
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[
                {
                    'AccountIds': ['012345678910', '111111111111', '222222222222'],
                    'AllAwsRegions': False
                }
            ]
        )
    assert 'Your request does not specify any regions' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidParameterValueException'

    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            OrganizationAggregationSource={
                'RoleArn': 'arn:aws:iam::012345678910:role/SomeRole'
            }
        )
    assert 'Your request does not specify any regions' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidParameterValueException'

    # With both region flags defined:
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[
                {
                    'AccountIds': ['012345678910', '111111111111', '222222222222'],
                    'AwsRegions': ['us-east-1', 'us-west-2'],
                    'AllAwsRegions': True
                }
            ]
        )
    assert 'You must choose one of these options' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidParameterValueException'

    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            OrganizationAggregationSource={
                'RoleArn': 'arn:aws:iam::012345678910:role/SomeRole',
                'AwsRegions': ['us-east-1', 'us-west-2'],
                'AllAwsRegions': True
            }
        )
    assert 'You must choose one of these options' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidParameterValueException'

    # Name too long:
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='a' * 257,
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}]
        )
    assert 'configurationAggregatorName' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Too many tags (>50):
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}],
            Tags=[{'Key': '{}'.format(x), 'Value': '{}'.format(x)} for x in range(0, 51)]
        )
    assert 'Member must have length less than or equal to 50' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Tag key is too big (>128 chars):
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}],
            Tags=[{'Key': 'a' * 129, 'Value': 'a'}]
        )
    assert 'Member must have length less than or equal to 128' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Tag value is too big (>256 chars):
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}],
            Tags=[{'Key': 'tag', 'Value': 'a' * 257}]
        )
    assert 'Member must have length less than or equal to 256' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Duplicate Tags:
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}],
            Tags=[{'Key': 'a', 'Value': 'a'}, {'Key': 'a', 'Value': 'a'}]
        )
    assert 'Duplicate tag keys found.' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidInput'

    # Invalid characters in the tag key:
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}],
            Tags=[{'Key': '!', 'Value': 'a'}]
        )
    assert 'Member must satisfy regular expression pattern:' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # If it contains both the AccountAggregationSources and the OrganizationAggregationSource:
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing',
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': False}],
            OrganizationAggregationSource={
                'RoleArn': 'arn:aws:iam::012345678910:role/SomeRole',
                'AllAwsRegions': False
            }
        )
    assert 'AccountAggregationSource and the OrganizationAggregationSource' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidParameterValueException'

    # If it contains neither:
    with assert_raises(ClientError) as ce:
        client.put_configuration_aggregator(ConfigurationAggregatorName='testing')
    assert 'AccountAggregationSource or the OrganizationAggregationSource' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidParameterValueException'

    # Just make one:
    account_aggregation_source = {
        'AccountIds': ['012345678910', '111111111111', '222222222222'],
        'AwsRegions': ['us-east-1', 'us-west-2'],
        'AllAwsRegions': False
    }

    result = client.put_configuration_aggregator(
        ConfigurationAggregatorName='testing',
        AccountAggregationSources=[account_aggregation_source],
    )
    assert result['ConfigurationAggregator']['ConfigurationAggregatorName'] == 'testing'
    assert result['ConfigurationAggregator']['AccountAggregationSources'] == [account_aggregation_source]
    assert 'arn:aws:config:us-west-2:123456789012:config-aggregator/config-aggregator-' in \
        result['ConfigurationAggregator']['ConfigurationAggregatorArn']
    assert result['ConfigurationAggregator']['CreationTime'] == result['ConfigurationAggregator']['LastUpdatedTime']

    # Update the existing one:
    original_arn = result['ConfigurationAggregator']['ConfigurationAggregatorArn']
    account_aggregation_source.pop('AwsRegions')
    account_aggregation_source['AllAwsRegions'] = True
    result = client.put_configuration_aggregator(
        ConfigurationAggregatorName='testing',
        AccountAggregationSources=[account_aggregation_source]
    )

    assert result['ConfigurationAggregator']['ConfigurationAggregatorName'] == 'testing'
    assert result['ConfigurationAggregator']['AccountAggregationSources'] == [account_aggregation_source]
    assert result['ConfigurationAggregator']['ConfigurationAggregatorArn'] == original_arn

    # Make an org one:
    result = client.put_configuration_aggregator(
        ConfigurationAggregatorName='testingOrg',
        OrganizationAggregationSource={
            'RoleArn': 'arn:aws:iam::012345678910:role/SomeRole',
            'AwsRegions': ['us-east-1', 'us-west-2']
        }
    )

    assert result['ConfigurationAggregator']['ConfigurationAggregatorName'] == 'testingOrg'
    assert result['ConfigurationAggregator']['OrganizationAggregationSource'] == {
        'RoleArn': 'arn:aws:iam::012345678910:role/SomeRole',
        'AwsRegions': ['us-east-1', 'us-west-2'],
        'AllAwsRegions': False
    }
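The tag-validation rules exercised above (at most 50 tags, key at most 128 characters, value at most 256, a character whitelist, no duplicate keys) can be sketched as a standalone validator. This is a hypothetical illustration, not moto's actual implementation; `TAG_KEY_PATTERN` is an assumed approximation of the AWS pattern.

```python
import re

# Assumed approximation of the AWS tag-key character class, for illustration only.
TAG_KEY_PATTERN = re.compile(r'^[\w\s_.:/=+\-@]+$')


def validate_tags(tags):
    """Return the first violated rule's message fragment, or None if the tags are valid."""
    if len(tags) > 50:
        return 'Member must have length less than or equal to 50'
    seen = set()
    for tag in tags:
        if len(tag['Key']) > 128:
            return 'Member must have length less than or equal to 128'
        if len(tag['Value']) > 256:
            return 'Member must have length less than or equal to 256'
        if not TAG_KEY_PATTERN.match(tag['Key']):
            return 'Member must satisfy regular expression pattern:'
        if tag['Key'] in seen:
            return 'Duplicate tag keys found.'
        seen.add(tag['Key'])
    return None
```

Each branch mirrors one of the ClientError assertions in the test above.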


@mock_config
def test_describe_configuration_aggregators():
    client = boto3.client('config', region_name='us-west-2')

    # Without any config aggregators:
    assert not client.describe_configuration_aggregators()['ConfigurationAggregators']

    # Make 10 config aggregators:
    for x in range(0, 10):
        client.put_configuration_aggregator(
            ConfigurationAggregatorName='testing{}'.format(x),
            AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}]
        )

    # Describe with an incorrect name:
    with assert_raises(ClientError) as ce:
        client.describe_configuration_aggregators(ConfigurationAggregatorNames=['DoesNotExist'])
    assert 'The configuration aggregator does not exist.' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'NoSuchConfigurationAggregatorException'

    # Error describe with more than 1 item in the list:
    with assert_raises(ClientError) as ce:
        client.describe_configuration_aggregators(ConfigurationAggregatorNames=['testing0', 'DoesNotExist'])
    assert 'At least one of the configuration aggregators does not exist.' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'NoSuchConfigurationAggregatorException'

    # Get the normal list:
    result = client.describe_configuration_aggregators()
    assert not result.get('NextToken')
    assert len(result['ConfigurationAggregators']) == 10

    # Test filtered list:
    agg_names = ['testing0', 'testing1', 'testing2']
    result = client.describe_configuration_aggregators(ConfigurationAggregatorNames=agg_names)
    assert not result.get('NextToken')
    assert len(result['ConfigurationAggregators']) == 3
    assert [agg['ConfigurationAggregatorName'] for agg in result['ConfigurationAggregators']] == agg_names

    # Test Pagination:
    result = client.describe_configuration_aggregators(Limit=4)
    assert len(result['ConfigurationAggregators']) == 4
    assert result['NextToken'] == 'testing4'
    assert [agg['ConfigurationAggregatorName'] for agg in result['ConfigurationAggregators']] == \
        ['testing{}'.format(x) for x in range(0, 4)]
    result = client.describe_configuration_aggregators(Limit=4, NextToken='testing4')
    assert len(result['ConfigurationAggregators']) == 4
    assert result['NextToken'] == 'testing8'
    assert [agg['ConfigurationAggregatorName'] for agg in result['ConfigurationAggregators']] == \
        ['testing{}'.format(x) for x in range(4, 8)]
    result = client.describe_configuration_aggregators(Limit=4, NextToken='testing8')
    assert len(result['ConfigurationAggregators']) == 2
    assert not result.get('NextToken')
    assert [agg['ConfigurationAggregatorName'] for agg in result['ConfigurationAggregators']] == \
        ['testing{}'.format(x) for x in range(8, 10)]

    # Test Pagination with Filtering:
    result = client.describe_configuration_aggregators(ConfigurationAggregatorNames=['testing2', 'testing4'], Limit=1)
    assert len(result['ConfigurationAggregators']) == 1
    assert result['NextToken'] == 'testing4'
    assert result['ConfigurationAggregators'][0]['ConfigurationAggregatorName'] == 'testing2'
    result = client.describe_configuration_aggregators(ConfigurationAggregatorNames=['testing2', 'testing4'], Limit=1, NextToken='testing4')
    assert not result.get('NextToken')
    assert result['ConfigurationAggregators'][0]['ConfigurationAggregatorName'] == 'testing4'

    # Test with an invalid token:
    with assert_raises(ClientError) as ce:
        client.describe_configuration_aggregators(NextToken='WRONG')
    assert 'The nextToken provided is invalid' == ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidNextTokenException'
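The pagination contract exercised above can be sketched as a small helper: results are name-sorted, `NextToken` is the name of the first item of the next page, and an unknown token is rejected. This is an illustrative sketch of the scheme the test asserts, not moto's actual implementation; `paginate_by_name` is a hypothetical function name.

```python
def paginate_by_name(names, limit=None, next_token=None):
    """Return (page, next_token) over a name-sorted list, token = first name of next page."""
    sorted_names = sorted(names)
    start = 0
    if next_token is not None:
        if next_token not in sorted_names:
            # Mirrors the InvalidNextTokenException path in the test above.
            raise ValueError('The nextToken provided is invalid')
        start = sorted_names.index(next_token)
    end = start + limit if limit else len(sorted_names)
    page = sorted_names[start:end]
    new_token = sorted_names[end] if end < len(sorted_names) else None
    return page, new_token
```

With ten aggregators named `testing0`..`testing9` and `limit=4`, three successive calls yield pages of 4, 4, and 2 items with tokens `testing4`, `testing8`, and `None`, matching the assertions above.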


@mock_config
def test_put_aggregation_authorization():
    client = boto3.client('config', region_name='us-west-2')

    # Too many tags (>50):
    with assert_raises(ClientError) as ce:
        client.put_aggregation_authorization(
            AuthorizedAccountId='012345678910',
            AuthorizedAwsRegion='us-west-2',
            Tags=[{'Key': '{}'.format(x), 'Value': '{}'.format(x)} for x in range(0, 51)]
        )
    assert 'Member must have length less than or equal to 50' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Tag key is too big (>128 chars):
    with assert_raises(ClientError) as ce:
        client.put_aggregation_authorization(
            AuthorizedAccountId='012345678910',
            AuthorizedAwsRegion='us-west-2',
            Tags=[{'Key': 'a' * 129, 'Value': 'a'}]
        )
    assert 'Member must have length less than or equal to 128' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Tag value is too big (>256 chars):
    with assert_raises(ClientError) as ce:
        client.put_aggregation_authorization(
            AuthorizedAccountId='012345678910',
            AuthorizedAwsRegion='us-west-2',
            Tags=[{'Key': 'tag', 'Value': 'a' * 257}]
        )
    assert 'Member must have length less than or equal to 256' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Duplicate Tags:
    with assert_raises(ClientError) as ce:
        client.put_aggregation_authorization(
            AuthorizedAccountId='012345678910',
            AuthorizedAwsRegion='us-west-2',
            Tags=[{'Key': 'a', 'Value': 'a'}, {'Key': 'a', 'Value': 'a'}]
        )
    assert 'Duplicate tag keys found.' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidInput'

    # Invalid characters in the tag key:
    with assert_raises(ClientError) as ce:
        client.put_aggregation_authorization(
            AuthorizedAccountId='012345678910',
            AuthorizedAwsRegion='us-west-2',
            Tags=[{'Key': '!', 'Value': 'a'}]
        )
    assert 'Member must satisfy regular expression pattern:' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'ValidationException'

    # Put a normal one there:
    result = client.put_aggregation_authorization(AuthorizedAccountId='012345678910', AuthorizedAwsRegion='us-east-1',
                                                  Tags=[{'Key': 'tag', 'Value': 'a'}])

    assert result['AggregationAuthorization']['AggregationAuthorizationArn'] == 'arn:aws:config:us-west-2:123456789012:' \
        'aggregation-authorization/012345678910/us-east-1'
    assert result['AggregationAuthorization']['AuthorizedAccountId'] == '012345678910'
    assert result['AggregationAuthorization']['AuthorizedAwsRegion'] == 'us-east-1'
    assert isinstance(result['AggregationAuthorization']['CreationTime'], datetime)

    creation_date = result['AggregationAuthorization']['CreationTime']

    # And again:
    result = client.put_aggregation_authorization(AuthorizedAccountId='012345678910', AuthorizedAwsRegion='us-east-1')
    assert result['AggregationAuthorization']['AggregationAuthorizationArn'] == 'arn:aws:config:us-west-2:123456789012:' \
        'aggregation-authorization/012345678910/us-east-1'
    assert result['AggregationAuthorization']['AuthorizedAccountId'] == '012345678910'
    assert result['AggregationAuthorization']['AuthorizedAwsRegion'] == 'us-east-1'
    assert result['AggregationAuthorization']['CreationTime'] == creation_date
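The "And again:" step above relies on the put being idempotent: re-putting the same (account, region) pair returns the original record, so the ARN and `CreationTime` are unchanged. A minimal in-memory sketch of that semantic (an illustration under assumed names, not moto's code; the owning account and region in the ARN are hardcoded to match the test):

```python
from datetime import datetime


class AggregationAuthorizationStore:
    """Idempotent keyed store: first put creates, later puts return the same record."""

    def __init__(self):
        self._auths = {}

    def put(self, account_id, region):
        key = '{}/{}'.format(account_id, region)
        if key not in self._auths:
            self._auths[key] = {
                'AggregationAuthorizationArn':
                    'arn:aws:config:us-west-2:123456789012:aggregation-authorization/' + key,
                'AuthorizedAccountId': account_id,
                'AuthorizedAwsRegion': region,
                'CreationTime': datetime.utcnow(),
            }
        return self._auths[key]
```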


@mock_config
def test_describe_aggregation_authorizations():
    client = boto3.client('config', region_name='us-west-2')

    # With no aggregation authorizations:
    assert not client.describe_aggregation_authorizations()['AggregationAuthorizations']

    # Make 10 account authorizations:
    for i in range(0, 10):
        client.put_aggregation_authorization(AuthorizedAccountId='{}'.format(str(i) * 12), AuthorizedAwsRegion='us-west-2')

    result = client.describe_aggregation_authorizations()
    assert len(result['AggregationAuthorizations']) == 10
    assert not result.get('NextToken')
    for i in range(0, 10):
        assert result['AggregationAuthorizations'][i]['AuthorizedAccountId'] == str(i) * 12

    # Test Pagination:
    result = client.describe_aggregation_authorizations(Limit=4)
    assert len(result['AggregationAuthorizations']) == 4
    assert result['NextToken'] == ('4' * 12) + '/us-west-2'
    assert [auth['AuthorizedAccountId'] for auth in result['AggregationAuthorizations']] == ['{}'.format(str(x) * 12) for x in range(0, 4)]

    result = client.describe_aggregation_authorizations(Limit=4, NextToken=('4' * 12) + '/us-west-2')
    assert len(result['AggregationAuthorizations']) == 4
    assert result['NextToken'] == ('8' * 12) + '/us-west-2'
    assert [auth['AuthorizedAccountId'] for auth in result['AggregationAuthorizations']] == ['{}'.format(str(x) * 12) for x in range(4, 8)]

    result = client.describe_aggregation_authorizations(Limit=4, NextToken=('8' * 12) + '/us-west-2')
    assert len(result['AggregationAuthorizations']) == 2
    assert not result.get('NextToken')
    assert [auth['AuthorizedAccountId'] for auth in result['AggregationAuthorizations']] == ['{}'.format(str(x) * 12) for x in range(8, 10)]

    # Test with an invalid token:
    with assert_raises(ClientError) as ce:
        client.describe_aggregation_authorizations(NextToken='WRONG')
    assert 'The nextToken provided is invalid' == ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'InvalidNextTokenException'


@mock_config
def test_delete_aggregation_authorization():
    client = boto3.client('config', region_name='us-west-2')

    client.put_aggregation_authorization(AuthorizedAccountId='012345678910', AuthorizedAwsRegion='us-west-2')

    # Delete it:
    client.delete_aggregation_authorization(AuthorizedAccountId='012345678910', AuthorizedAwsRegion='us-west-2')

    # Verify that none are there:
    assert not client.describe_aggregation_authorizations()['AggregationAuthorizations']

    # Try it again -- nothing should happen:
    client.delete_aggregation_authorization(AuthorizedAccountId='012345678910', AuthorizedAwsRegion='us-west-2')


@mock_config
def test_delete_configuration_aggregator():
    client = boto3.client('config', region_name='us-west-2')
    client.put_configuration_aggregator(
        ConfigurationAggregatorName='testing',
        AccountAggregationSources=[{'AccountIds': ['012345678910'], 'AllAwsRegions': True}]
    )

    client.delete_configuration_aggregator(ConfigurationAggregatorName='testing')

    # And again to confirm that it's deleted:
    with assert_raises(ClientError) as ce:
        client.delete_configuration_aggregator(ConfigurationAggregatorName='testing')
    assert 'The configuration aggregator does not exist.' in ce.exception.response['Error']['Message']
    assert ce.exception.response['Error']['Code'] == 'NoSuchConfigurationAggregatorException'


@mock_config
def test_describe_configurations():
    client = boto3.client('config', region_name='us-west-2')
@@ -273,6 +273,27 @@ def test_access_denied_with_denying_policy():
    )


@set_initial_no_auth_action_count(3)
@mock_sts
def test_get_caller_identity_allowed_with_denying_policy():
    user_name = 'test-user'
    inline_policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Action": "sts:GetCallerIdentity",
                "Resource": "*"
            }
        ]
    }
    access_key = create_user_with_access_key_and_inline_policy(user_name, inline_policy_document)
    client = boto3.client('sts', region_name='us-east-1',
                          aws_access_key_id=access_key['AccessKeyId'],
                          aws_secret_access_key=access_key['SecretAccessKey'])
    client.get_caller_identity().should.be.a(dict)


@set_initial_no_auth_action_count(3)
@mock_ec2
def test_allowed_with_wildcard_action():
@@ -2141,3 +2141,55 @@ def test_scan_by_non_exists_index():
    ex.exception.response['Error']['Message'].should.equal(
        'The table does not have the specified index: non_exists_index'
    )


@mock_dynamodb2
def test_batch_items_returns_all():
    dynamodb = _create_user_table()
    returned_items = dynamodb.batch_get_item(RequestItems={
        'users': {
            'Keys': [{
                'username': {'S': 'user0'}
            }, {
                'username': {'S': 'user1'}
            }, {
                'username': {'S': 'user2'}
            }, {
                'username': {'S': 'user3'}
            }],
            'ConsistentRead': True
        }
    })['Responses']['users']
    assert len(returned_items) == 3
    assert [item['username']['S'] for item in returned_items] == ['user1', 'user2', 'user3']


@mock_dynamodb2
def test_batch_items_should_throw_exception_for_duplicate_request():
    client = _create_user_table()
    with assert_raises(ClientError) as ex:
        client.batch_get_item(RequestItems={
            'users': {
                'Keys': [{
                    'username': {'S': 'user0'}
                }, {
                    'username': {'S': 'user0'}
                }],
                'ConsistentRead': True
            }})
    ex.exception.response['Error']['Code'].should.equal('ValidationException')
    ex.exception.response['Error']['Message'].should.equal('Provided list of item keys contains duplicates')


def _create_user_table():
    client = boto3.client('dynamodb', region_name='us-east-1')
    client.create_table(
        TableName='users',
        KeySchema=[{'AttributeName': 'username', 'KeyType': 'HASH'}],
        AttributeDefinitions=[{'AttributeName': 'username', 'AttributeType': 'S'}],
        ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
    )
    client.put_item(TableName='users', Item={'username': {'S': 'user1'}, 'foo': {'S': 'bar'}})
    client.put_item(TableName='users', Item={'username': {'S': 'user2'}, 'foo': {'S': 'bar'}})
    client.put_item(TableName='users', Item={'username': {'S': 'user3'}, 'foo': {'S': 'bar'}})
    return client
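The duplicate-key test above triggers DynamoDB's rule that a single `batch_get_item` request item may not contain the same key twice. A hypothetical helper (not DynamoDB's or moto's code) showing the check being exercised:

```python
def has_duplicate_keys(keys):
    """True if any key dict appears more than once in the request's key list."""
    seen = []
    for key in keys:
        if key in seen:  # dicts are unhashable, so compare by equality
            return True
        seen.append(key)
    return False
```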
tests/test_ec2/test_launch_templates.py (new file, 415 additions)
@@ -0,0 +1,415 @@
import boto3
import sure  # noqa

from nose.tools import assert_raises
from botocore.client import ClientError

from moto import mock_ec2


@mock_ec2
def test_launch_template_create():
    cli = boto3.client("ec2", region_name="us-east-1")

    resp = cli.create_launch_template(
        LaunchTemplateName="test-template",

        # the absolute minimum needed to create a template without other resources
        LaunchTemplateData={
            "TagSpecifications": [{
                "ResourceType": "instance",
                "Tags": [{
                    "Key": "test",
                    "Value": "value",
                }],
            }],
        },
    )

    resp.should.have.key("LaunchTemplate")
    lt = resp["LaunchTemplate"]
    lt["LaunchTemplateName"].should.equal("test-template")
    lt["DefaultVersionNumber"].should.equal(1)
    lt["LatestVersionNumber"].should.equal(1)

    with assert_raises(ClientError) as ex:
        cli.create_launch_template(
            LaunchTemplateName="test-template",
            LaunchTemplateData={
                "TagSpecifications": [{
                    "ResourceType": "instance",
                    "Tags": [{
                        "Key": "test",
                        "Value": "value",
                    }],
                }],
            },
        )

    str(ex.exception).should.equal(
        'An error occurred (InvalidLaunchTemplateName.AlreadyExistsException) when calling the CreateLaunchTemplate operation: Launch template name already in use.')
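The version bookkeeping these launch-template tests exercise can be sketched with a minimal in-memory model (an illustration, not moto's implementation): the first version is both default and latest, and each created version bumps only the latest number.

```python
class LaunchTemplate:
    """Toy model: versions is a 1-based list of LaunchTemplateData dicts."""

    def __init__(self, name, data):
        self.name = name
        self.versions = [data]      # creating the template creates version 1
        self.default_version = 1

    @property
    def latest_version(self):
        return len(self.versions)

    def create_version(self, data, description=""):
        self.versions.append(data)
        return {
            "VersionNumber": self.latest_version,
            "DefaultVersion": self.latest_version == self.default_version,
            "VersionDescription": description,
        }
```

With this model, a fresh template reports default and latest version 1, and the first `create_version` call returns version 2 with `DefaultVersion` false, mirroring the assertions in the tests below.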


@mock_ec2
def test_describe_launch_template_versions():
    template_data = {
        "ImageId": "ami-abc123",
        "DisableApiTermination": False,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{
                "Key": "test",
                "Value": "value",
            }],
        }],
        "SecurityGroupIds": [
            "sg-1234",
            "sg-ab5678",
        ],
    }

    cli = boto3.client("ec2", region_name="us-east-1")

    create_resp = cli.create_launch_template(
        LaunchTemplateName="test-template",
        LaunchTemplateData=template_data)

    # test using name
    resp = cli.describe_launch_template_versions(
        LaunchTemplateName="test-template",
        Versions=['1'])

    templ = resp["LaunchTemplateVersions"][0]["LaunchTemplateData"]
    templ.should.equal(template_data)

    # test using id
    resp = cli.describe_launch_template_versions(
        LaunchTemplateId=create_resp["LaunchTemplate"]["LaunchTemplateId"],
        Versions=['1'])

    templ = resp["LaunchTemplateVersions"][0]["LaunchTemplateData"]
    templ.should.equal(template_data)


@mock_ec2
def test_create_launch_template_version():
    cli = boto3.client("ec2", region_name="us-east-1")

    create_resp = cli.create_launch_template(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-abc123"
        })

    version_resp = cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-def456"
        },
        VersionDescription="new ami")

    version_resp.should.have.key("LaunchTemplateVersion")
    version = version_resp["LaunchTemplateVersion"]
    version["DefaultVersion"].should.equal(False)
    version["LaunchTemplateId"].should.equal(create_resp["LaunchTemplate"]["LaunchTemplateId"])
    version["VersionDescription"].should.equal("new ami")
    version["VersionNumber"].should.equal(2)


@mock_ec2
def test_create_launch_template_version_by_id():
    cli = boto3.client("ec2", region_name="us-east-1")

    create_resp = cli.create_launch_template(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-abc123"
        })

    version_resp = cli.create_launch_template_version(
        LaunchTemplateId=create_resp["LaunchTemplate"]["LaunchTemplateId"],
        LaunchTemplateData={
            "ImageId": "ami-def456"
        },
        VersionDescription="new ami")

    version_resp.should.have.key("LaunchTemplateVersion")
    version = version_resp["LaunchTemplateVersion"]
    version["DefaultVersion"].should.equal(False)
    version["LaunchTemplateId"].should.equal(create_resp["LaunchTemplate"]["LaunchTemplateId"])
    version["VersionDescription"].should.equal("new ami")
    version["VersionNumber"].should.equal(2)


@mock_ec2
def test_describe_launch_template_versions_with_multiple_versions():
    cli = boto3.client("ec2", region_name="us-east-1")

    cli.create_launch_template(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-abc123"
        })

    cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-def456"
        },
        VersionDescription="new ami")

    resp = cli.describe_launch_template_versions(
        LaunchTemplateName="test-template")

    resp["LaunchTemplateVersions"].should.have.length_of(2)
    resp["LaunchTemplateVersions"][0]["LaunchTemplateData"]["ImageId"].should.equal("ami-abc123")
    resp["LaunchTemplateVersions"][1]["LaunchTemplateData"]["ImageId"].should.equal("ami-def456")


@mock_ec2
def test_describe_launch_template_versions_with_versions_option():
    cli = boto3.client("ec2", region_name="us-east-1")

    cli.create_launch_template(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-abc123"
        })

    cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-def456"
        },
        VersionDescription="new ami")

    cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-hij789"
        },
        VersionDescription="new ami, again")

    resp = cli.describe_launch_template_versions(
        LaunchTemplateName="test-template",
        Versions=["2", "3"])

    resp["LaunchTemplateVersions"].should.have.length_of(2)
    resp["LaunchTemplateVersions"][0]["LaunchTemplateData"]["ImageId"].should.equal("ami-def456")
    resp["LaunchTemplateVersions"][1]["LaunchTemplateData"]["ImageId"].should.equal("ami-hij789")


@mock_ec2
def test_describe_launch_template_versions_with_min():
    cli = boto3.client("ec2", region_name="us-east-1")

    cli.create_launch_template(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-abc123"
        })

    cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-def456"
        },
        VersionDescription="new ami")

    cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-hij789"
        },
        VersionDescription="new ami, again")

    resp = cli.describe_launch_template_versions(
        LaunchTemplateName="test-template",
        MinVersion="2")

    resp["LaunchTemplateVersions"].should.have.length_of(2)
    resp["LaunchTemplateVersions"][0]["LaunchTemplateData"]["ImageId"].should.equal("ami-def456")
    resp["LaunchTemplateVersions"][1]["LaunchTemplateData"]["ImageId"].should.equal("ami-hij789")


@mock_ec2
def test_describe_launch_template_versions_with_max():
    cli = boto3.client("ec2", region_name="us-east-1")

    cli.create_launch_template(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-abc123"
        })

    cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-def456"
        },
        VersionDescription="new ami")

    cli.create_launch_template_version(
        LaunchTemplateName="test-template",
        LaunchTemplateData={
            "ImageId": "ami-hij789"
        },
        VersionDescription="new ami, again")

    resp = cli.describe_launch_template_versions(
        LaunchTemplateName="test-template",
        MaxVersion="2")

    resp["LaunchTemplateVersions"].should.have.length_of(2)
    resp["LaunchTemplateVersions"][0]["LaunchTemplateData"]["ImageId"].should.equal("ami-abc123")
    resp["LaunchTemplateVersions"][1]["LaunchTemplateData"]["ImageId"].should.equal("ami-def456")


@mock_ec2
def test_describe_launch_template_versions_with_min_and_max():
||||
cli = boto3.client("ec2", region_name="us-east-1")
|
||||
|
||||
cli.create_launch_template(
|
||||
LaunchTemplateName="test-template",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-abc123"
|
||||
})
|
||||
|
||||
cli.create_launch_template_version(
|
||||
LaunchTemplateName="test-template",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-def456"
|
||||
},
|
||||
VersionDescription="new ami")
|
||||
|
||||
cli.create_launch_template_version(
|
||||
LaunchTemplateName="test-template",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-hij789"
|
||||
},
|
||||
VersionDescription="new ami, again")
|
||||
|
||||
cli.create_launch_template_version(
|
||||
LaunchTemplateName="test-template",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-345abc"
|
||||
},
|
||||
VersionDescription="new ami, because why not")
|
||||
|
||||
resp = cli.describe_launch_template_versions(
|
||||
LaunchTemplateName="test-template",
|
||||
MinVersion="2",
|
||||
MaxVersion="3")
|
||||
|
||||
resp["LaunchTemplateVersions"].should.have.length_of(2)
|
||||
resp["LaunchTemplateVersions"][0]["LaunchTemplateData"]["ImageId"].should.equal("ami-def456")
|
||||
resp["LaunchTemplateVersions"][1]["LaunchTemplateData"]["ImageId"].should.equal("ami-hij789")
|
||||
|
||||
|
||||
@mock_ec2
|
||||
def test_describe_launch_templates():
|
||||
cli = boto3.client("ec2", region_name="us-east-1")
|
||||
|
||||
lt_ids = []
|
||||
r = cli.create_launch_template(
|
||||
LaunchTemplateName="test-template",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-abc123"
|
||||
})
|
||||
lt_ids.append(r["LaunchTemplate"]["LaunchTemplateId"])
|
||||
|
||||
r = cli.create_launch_template(
|
||||
LaunchTemplateName="test-template2",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-abc123"
|
||||
})
|
||||
lt_ids.append(r["LaunchTemplate"]["LaunchTemplateId"])
|
||||
|
||||
# general call, all templates
|
||||
resp = cli.describe_launch_templates()
|
||||
resp.should.have.key("LaunchTemplates")
|
||||
resp["LaunchTemplates"].should.have.length_of(2)
|
||||
resp["LaunchTemplates"][0]["LaunchTemplateName"].should.equal("test-template")
|
||||
resp["LaunchTemplates"][1]["LaunchTemplateName"].should.equal("test-template2")
|
||||
|
||||
# filter by names
|
||||
resp = cli.describe_launch_templates(
|
||||
LaunchTemplateNames=["test-template2", "test-template"])
|
||||
resp.should.have.key("LaunchTemplates")
|
||||
resp["LaunchTemplates"].should.have.length_of(2)
|
||||
resp["LaunchTemplates"][0]["LaunchTemplateName"].should.equal("test-template2")
|
||||
resp["LaunchTemplates"][1]["LaunchTemplateName"].should.equal("test-template")
|
||||
|
||||
# filter by ids
|
||||
resp = cli.describe_launch_templates(LaunchTemplateIds=lt_ids)
|
||||
resp.should.have.key("LaunchTemplates")
|
||||
resp["LaunchTemplates"].should.have.length_of(2)
|
||||
resp["LaunchTemplates"][0]["LaunchTemplateName"].should.equal("test-template")
|
||||
resp["LaunchTemplates"][1]["LaunchTemplateName"].should.equal("test-template2")
|
||||
|
||||
|
||||
@mock_ec2
|
||||
def test_describe_launch_templates_with_filters():
|
||||
cli = boto3.client("ec2", region_name="us-east-1")
|
||||
|
||||
r = cli.create_launch_template(
|
||||
LaunchTemplateName="test-template",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-abc123"
|
||||
})
|
||||
|
||||
cli.create_tags(
|
||||
Resources=[r["LaunchTemplate"]["LaunchTemplateId"]],
|
||||
Tags=[
|
||||
{"Key": "tag1", "Value": "a value"},
|
||||
{"Key": "another-key", "Value": "this value"},
|
||||
])
|
||||
|
||||
cli.create_launch_template(
|
||||
LaunchTemplateName="no-tags",
|
||||
LaunchTemplateData={
|
||||
"ImageId": "ami-abc123"
|
||||
})
|
||||
|
||||
resp = cli.describe_launch_templates(Filters=[{
|
||||
"Name": "tag:tag1", "Values": ["a value"]
|
||||
}])
|
||||
|
||||
resp["LaunchTemplates"].should.have.length_of(1)
|
||||
resp["LaunchTemplates"][0]["LaunchTemplateName"].should.equal("test-template")
|
||||
|
||||
resp = cli.describe_launch_templates(Filters=[{
|
||||
"Name": "launch-template-name", "Values": ["no-tags"]
|
||||
}])
|
||||
resp["LaunchTemplates"].should.have.length_of(1)
|
||||
resp["LaunchTemplates"][0]["LaunchTemplateName"].should.equal("no-tags")
|
||||
|
||||
|
||||
@mock_ec2
|
||||
def test_create_launch_template_with_tag_spec():
|
||||
cli = boto3.client("ec2", region_name="us-east-1")
|
||||
|
||||
cli.create_launch_template(
|
||||
LaunchTemplateName="test-template",
|
||||
LaunchTemplateData={"ImageId": "ami-abc123"},
|
||||
TagSpecifications=[{
|
||||
"ResourceType": "instance",
|
||||
"Tags": [
|
||||
{"Key": "key", "Value": "value"}
|
||||
]
|
||||
}],
|
||||
)
|
||||
|
||||
resp = cli.describe_launch_template_versions(
|
||||
LaunchTemplateName="test-template",
|
||||
Versions=["1"])
|
||||
version = resp["LaunchTemplateVersions"][0]
|
||||
|
||||
version["LaunchTemplateData"].should.have.key("TagSpecifications")
|
||||
version["LaunchTemplateData"]["TagSpecifications"].should.have.length_of(1)
|
||||
version["LaunchTemplateData"]["TagSpecifications"][0].should.equal({
|
||||
"ResourceType": "instance",
|
||||
"Tags": [
|
||||
{"Key": "key", "Value": "value"}
|
||||
]
|
||||
})
|
@@ -1,4 +1,5 @@
from __future__ import unicode_literals
from datetime import datetime

from copy import deepcopy

@@ -477,6 +478,8 @@ def test_describe_services():
    response['services'][0]['deployments'][0]['pendingCount'].should.equal(2)
    response['services'][0]['deployments'][0]['runningCount'].should.equal(0)
    response['services'][0]['deployments'][0]['status'].should.equal('PRIMARY')
    (datetime.now() - response['services'][0]['deployments'][0]["createdAt"].replace(tzinfo=None)).seconds.should.be.within(0, 10)
    (datetime.now() - response['services'][0]['deployments'][0]["updatedAt"].replace(tzinfo=None)).seconds.should.be.within(0, 10)


@mock_ecs
@@ -1811,3 +1811,132 @@ def test_redirect_action_listener_rule_cloudformation():
            'Port': '443', 'Protocol': 'HTTPS', 'StatusCode': 'HTTP_301',
        }
    },])


@mock_elbv2
@mock_ec2
def test_cognito_action_listener_rule():
    conn = boto3.client('elbv2', region_name='us-east-1')
    ec2 = boto3.resource('ec2', region_name='us-east-1')

    security_group = ec2.create_security_group(
        GroupName='a-security-group', Description='First One')
    vpc = ec2.create_vpc(CidrBlock='172.28.7.0/24', InstanceTenancy='default')
    subnet1 = ec2.create_subnet(
        VpcId=vpc.id,
        CidrBlock='172.28.7.192/26',
        AvailabilityZone='us-east-1a')
    subnet2 = ec2.create_subnet(
        VpcId=vpc.id,
        CidrBlock='172.28.7.128/26',
        AvailabilityZone='us-east-1b')

    response = conn.create_load_balancer(
        Name='my-lb',
        Subnets=[subnet1.id, subnet2.id],
        SecurityGroups=[security_group.id],
        Scheme='internal',
        Tags=[{'Key': 'key_name', 'Value': 'a_value'}])
    load_balancer_arn = response.get('LoadBalancers')[0].get('LoadBalancerArn')

    action = {
        'Type': 'authenticate-cognito',
        'AuthenticateCognitoConfig': {
            'UserPoolArn': 'arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_ABCD1234',
            'UserPoolClientId': 'abcd1234abcd',
            'UserPoolDomain': 'testpool',
        }
    }
    response = conn.create_listener(LoadBalancerArn=load_balancer_arn,
                                    Protocol='HTTP',
                                    Port=80,
                                    DefaultActions=[action])

    listener = response.get('Listeners')[0]
    listener.get('DefaultActions')[0].should.equal(action)
    listener_arn = listener.get('ListenerArn')

    describe_rules_response = conn.describe_rules(ListenerArn=listener_arn)
    describe_rules_response['Rules'][0]['Actions'][0].should.equal(action)

    describe_listener_response = conn.describe_listeners(ListenerArns=[listener_arn, ])
    describe_listener_actions = describe_listener_response['Listeners'][0]['DefaultActions'][0]
    describe_listener_actions.should.equal(action)


@mock_elbv2
@mock_cloudformation
def test_cognito_action_listener_rule_cloudformation():
    cnf_conn = boto3.client('cloudformation', region_name='us-east-1')
    elbv2_client = boto3.client('elbv2', region_name='us-east-1')

    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "ECS Cluster Test CloudFormation",
        "Resources": {
            "testVPC": {
                "Type": "AWS::EC2::VPC",
                "Properties": {
                    "CidrBlock": "10.0.0.0/16",
                },
            },
            "subnet1": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "CidrBlock": "10.0.0.0/24",
                    "VpcId": {"Ref": "testVPC"},
                    "AvailabilityZone": "us-east-1b",
                },
            },
            "subnet2": {
                "Type": "AWS::EC2::Subnet",
                "Properties": {
                    "CidrBlock": "10.0.1.0/24",
                    "VpcId": {"Ref": "testVPC"},
                    "AvailabilityZone": "us-east-1b",
                },
            },
            "testLb": {
                "Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
                "Properties": {
                    "Name": "my-lb",
                    "Subnets": [{"Ref": "subnet1"}, {"Ref": "subnet2"}],
                    "Type": "application",
                    "SecurityGroups": [],
                }
            },
            "testListener": {
                "Type": "AWS::ElasticLoadBalancingV2::Listener",
                "Properties": {
                    "LoadBalancerArn": {"Ref": "testLb"},
                    "Port": 80,
                    "Protocol": "HTTP",
                    "DefaultActions": [{
                        "Type": "authenticate-cognito",
                        "AuthenticateCognitoConfig": {
                            'UserPoolArn': 'arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_ABCD1234',
                            'UserPoolClientId': 'abcd1234abcd',
                            'UserPoolDomain': 'testpool',
                        }
                    }]
                }

            }
        }
    }
    template_json = json.dumps(template)
    cnf_conn.create_stack(StackName="test-stack", TemplateBody=template_json)

    describe_load_balancers_response = elbv2_client.describe_load_balancers(Names=['my-lb',])
    load_balancer_arn = describe_load_balancers_response['LoadBalancers'][0]['LoadBalancerArn']
    describe_listeners_response = elbv2_client.describe_listeners(LoadBalancerArn=load_balancer_arn)

    describe_listeners_response['Listeners'].should.have.length_of(1)
    describe_listeners_response['Listeners'][0]['DefaultActions'].should.equal([{
        'Type': 'authenticate-cognito',
        "AuthenticateCognitoConfig": {
            'UserPoolArn': 'arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_ABCD1234',
            'UserPoolClientId': 'abcd1234abcd',
            'UserPoolDomain': 'testpool',
        }
    },])
@@ -944,7 +944,8 @@ def test_get_account_authorization_details():
    })

    conn = boto3.client('iam', region_name='us-east-1')
    conn.create_role(RoleName="my-role", AssumeRolePolicyDocument="some policy", Path="/my-path/")
    boundary = 'arn:aws:iam::123456789012:policy/boundary'
    conn.create_role(RoleName="my-role", AssumeRolePolicyDocument="some policy", Path="/my-path/", Description='testing', PermissionsBoundary=boundary)
    conn.create_user(Path='/', UserName='testUser')
    conn.create_group(Path='/', GroupName='testGroup')
    conn.create_policy(
@@ -985,6 +986,11 @@ def test_get_account_authorization_details():
    assert len(result['GroupDetailList']) == 0
    assert len(result['Policies']) == 0
    assert len(result['RoleDetailList'][0]['InstanceProfileList']) == 1
    assert result['RoleDetailList'][0]['InstanceProfileList'][0]['Roles'][0]['Description'] == 'testing'
    assert result['RoleDetailList'][0]['InstanceProfileList'][0]['Roles'][0]['PermissionsBoundary'] == {
        'PermissionsBoundaryType': 'PermissionsBoundaryPolicy',
        'PermissionsBoundaryArn': 'arn:aws:iam::123456789012:policy/boundary'
    }
    assert len(result['RoleDetailList'][0]['Tags']) == 2
    assert len(result['RoleDetailList'][0]['RolePolicyList']) == 1
    assert len(result['RoleDetailList'][0]['AttachedManagedPolicies']) == 1
@@ -1151,6 +1157,79 @@ def test_delete_saml_provider():
    assert not resp['Certificates']


@mock_iam()
def test_create_role_with_tags():
    """Tests both the tag_role and get_role_tags capability"""
    conn = boto3.client('iam', region_name='us-east-1')
    conn.create_role(RoleName="my-role", AssumeRolePolicyDocument="{}", Tags=[
        {
            'Key': 'somekey',
            'Value': 'somevalue'
        },
        {
            'Key': 'someotherkey',
            'Value': 'someothervalue'
        }
    ], Description='testing')

    # Get role:
    role = conn.get_role(RoleName='my-role')['Role']
    assert len(role['Tags']) == 2
    assert role['Tags'][0]['Key'] == 'somekey'
    assert role['Tags'][0]['Value'] == 'somevalue'
    assert role['Tags'][1]['Key'] == 'someotherkey'
    assert role['Tags'][1]['Value'] == 'someothervalue'
    assert role['Description'] == 'testing'

    # Empty is good:
    conn.create_role(RoleName="my-role2", AssumeRolePolicyDocument="{}", Tags=[
        {
            'Key': 'somekey',
            'Value': ''
        }
    ])
    tags = conn.list_role_tags(RoleName='my-role2')
    assert len(tags['Tags']) == 1
    assert tags['Tags'][0]['Key'] == 'somekey'
    assert tags['Tags'][0]['Value'] == ''

    # Test creating tags with invalid values:
    # With more than 50 tags:
    with assert_raises(ClientError) as ce:
        too_many_tags = list(map(lambda x: {'Key': str(x), 'Value': str(x)}, range(0, 51)))
        conn.create_role(RoleName="my-role3", AssumeRolePolicyDocument="{}", Tags=too_many_tags)
    assert 'failed to satisfy constraint: Member must have length less than or equal to 50.' \
        in ce.exception.response['Error']['Message']

    # With a duplicate tag:
    with assert_raises(ClientError) as ce:
        conn.create_role(RoleName="my-role3", AssumeRolePolicyDocument="{}", Tags=[{'Key': '0', 'Value': ''}, {'Key': '0', 'Value': ''}])
    assert 'Duplicate tag keys found. Please note that Tag keys are case insensitive.' \
        in ce.exception.response['Error']['Message']

    # Duplicate tag with different casing:
    with assert_raises(ClientError) as ce:
        conn.create_role(RoleName="my-role3", AssumeRolePolicyDocument="{}", Tags=[{'Key': 'a', 'Value': ''}, {'Key': 'A', 'Value': ''}])
    assert 'Duplicate tag keys found. Please note that Tag keys are case insensitive.' \
        in ce.exception.response['Error']['Message']

    # With a really big key:
    with assert_raises(ClientError) as ce:
        conn.create_role(RoleName="my-role3", AssumeRolePolicyDocument="{}", Tags=[{'Key': '0' * 129, 'Value': ''}])
    assert 'Member must have length less than or equal to 128.' in ce.exception.response['Error']['Message']

    # With a really big value:
    with assert_raises(ClientError) as ce:
        conn.create_role(RoleName="my-role3", AssumeRolePolicyDocument="{}", Tags=[{'Key': '0', 'Value': '0' * 257}])
    assert 'Member must have length less than or equal to 256.' in ce.exception.response['Error']['Message']

    # With an invalid character:
    with assert_raises(ClientError) as ce:
        conn.create_role(RoleName="my-role3", AssumeRolePolicyDocument="{}", Tags=[{'Key': 'NOWAY!', 'Value': ''}])
    assert 'Member must satisfy regular expression pattern: [\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]+' \
        in ce.exception.response['Error']['Message']


@mock_iam()
def test_tag_role():
    """Tests both the tag_role and get_role_tags capability"""
@@ -1338,6 +1417,7 @@ def test_update_role_description():

    assert response['Role']['RoleName'] == 'my-role'


@mock_iam()
def test_update_role():
    conn = boto3.client('iam', region_name='us-east-1')
@@ -1349,6 +1429,7 @@ def test_update_role():
    response = conn.update_role_description(RoleName="my-role", Description="test")
    assert response['Role']['RoleName'] == 'my-role'


@mock_iam()
def test_update_role():
    conn = boto3.client('iam', region_name='us-east-1')
@@ -1443,6 +1524,8 @@ def test_create_role_no_path():
    resp = conn.create_role(RoleName='my-role', AssumeRolePolicyDocument='some policy', Description='test')
    resp.get('Role').get('Arn').should.equal('arn:aws:iam::123456789012:role/my-role')
    resp.get('Role').should_not.have.key('PermissionsBoundary')
    resp.get('Role').get('Description').should.equal('test')


@mock_iam()
def test_create_role_with_permissions_boundary():
@@ -1454,6 +1537,7 @@ def test_create_role_with_permissions_boundary():
        'PermissionsBoundaryArn': boundary
    }
    resp.get('Role').get('PermissionsBoundary').should.equal(expected)
    resp.get('Role').get('Description').should.equal('test')

    invalid_boundary_arn = 'arn:aws:iam::123456789:not_a_boundary'
    with assert_raises(ClientError):
@@ -191,6 +191,7 @@ def test_decrypt():
    conn = boto.kms.connect_to_region('us-west-2')
    response = conn.decrypt('ZW5jcnlwdG1l'.encode('utf-8'))
    response['Plaintext'].should.equal(b'encryptme')
    response['KeyId'].should.equal('key_id')


@mock_kms_deprecated
@@ -162,3 +162,63 @@ def test_delete_retention_policy():

    response = conn.delete_log_group(logGroupName=log_group_name)


@mock_logs
def test_get_log_events():
    conn = boto3.client('logs', 'us-west-2')
    log_group_name = 'test'
    log_stream_name = 'stream'
    conn.create_log_group(logGroupName=log_group_name)
    conn.create_log_stream(
        logGroupName=log_group_name,
        logStreamName=log_stream_name
    )

    events = [{'timestamp': x, 'message': str(x)} for x in range(20)]

    conn.put_log_events(
        logGroupName=log_group_name,
        logStreamName=log_stream_name,
        logEvents=events
    )

    resp = conn.get_log_events(
        logGroupName=log_group_name,
        logStreamName=log_stream_name,
        limit=10)

    resp['events'].should.have.length_of(10)
    resp.should.have.key('nextForwardToken')
    resp.should.have.key('nextBackwardToken')
    for i in range(10):
        resp['events'][i]['timestamp'].should.equal(i)
        resp['events'][i]['message'].should.equal(str(i))

    next_token = resp['nextForwardToken']

    resp = conn.get_log_events(
        logGroupName=log_group_name,
        logStreamName=log_stream_name,
        nextToken=next_token,
        limit=10)

    resp['events'].should.have.length_of(10)
    resp.should.have.key('nextForwardToken')
    resp.should.have.key('nextBackwardToken')
    resp['nextForwardToken'].should.equal(next_token)
    for i in range(10):
        resp['events'][i]['timestamp'].should.equal(i+10)
        resp['events'][i]['message'].should.equal(str(i+10))

    resp = conn.get_log_events(
        logGroupName=log_group_name,
        logStreamName=log_stream_name,
        nextToken=resp['nextBackwardToken'],
        limit=10)

    resp['events'].should.have.length_of(10)
    resp.should.have.key('nextForwardToken')
    resp.should.have.key('nextBackwardToken')
    for i in range(10):
        resp['events'][i]['timestamp'].should.equal(i)
        resp['events'][i]['message'].should.equal(str(i))
@@ -1,7 +1,6 @@
from __future__ import unicode_literals

import six
import sure  # noqa
import datetime
from moto.organizations import utils

@@ -3,7 +3,6 @@ from __future__ import unicode_literals
import boto3
import json
import six
import sure  # noqa
from botocore.exceptions import ClientError
from nose.tools import assert_raises

@@ -27,6 +26,25 @@ def test_create_organization():
    validate_organization(response)
    response['Organization']['FeatureSet'].should.equal('ALL')

    response = client.list_accounts()
    len(response['Accounts']).should.equal(1)
    response['Accounts'][0]['Name'].should.equal('master')
    response['Accounts'][0]['Id'].should.equal(utils.MASTER_ACCOUNT_ID)
    response['Accounts'][0]['Email'].should.equal(utils.MASTER_ACCOUNT_EMAIL)

    response = client.list_policies(Filter='SERVICE_CONTROL_POLICY')
    len(response['Policies']).should.equal(1)
    response['Policies'][0]['Name'].should.equal('FullAWSAccess')
    response['Policies'][0]['Id'].should.equal(utils.DEFAULT_POLICY_ID)
    response['Policies'][0]['AwsManaged'].should.equal(True)

    response = client.list_targets_for_policy(PolicyId=utils.DEFAULT_POLICY_ID)
    len(response['Targets']).should.equal(2)
    root_ou = [t for t in response['Targets'] if t['Type'] == 'ROOT'][0]
    root_ou['Name'].should.equal('Root')
    master_account = [t for t in response['Targets'] if t['Type'] == 'ACCOUNT'][0]
    master_account['Name'].should.equal('master')


@mock_organizations
def test_describe_organization():
@@ -177,11 +195,11 @@ def test_list_accounts():
    response = client.list_accounts()
    response.should.have.key('Accounts')
    accounts = response['Accounts']
    len(accounts).should.equal(5)
    len(accounts).should.equal(6)
    for account in accounts:
        validate_account(org, account)
    accounts[3]['Name'].should.equal(mockname + '3')
    accounts[2]['Email'].should.equal(mockname + '2' + '@' + mockdomain)
    accounts[4]['Name'].should.equal(mockname + '3')
    accounts[3]['Email'].should.equal(mockname + '2' + '@' + mockdomain)


@mock_organizations
@@ -291,8 +309,10 @@ def test_list_children():
    response02 = client.list_children(ParentId=root_id, ChildType='ORGANIZATIONAL_UNIT')
    response03 = client.list_children(ParentId=ou01_id, ChildType='ACCOUNT')
    response04 = client.list_children(ParentId=ou01_id, ChildType='ORGANIZATIONAL_UNIT')
    response01['Children'][0]['Id'].should.equal(account01_id)
    response01['Children'][0]['Id'].should.equal(utils.MASTER_ACCOUNT_ID)
    response01['Children'][0]['Type'].should.equal('ACCOUNT')
    response01['Children'][1]['Id'].should.equal(account01_id)
    response01['Children'][1]['Type'].should.equal('ACCOUNT')
    response02['Children'][0]['Id'].should.equal(ou01_id)
    response02['Children'][0]['Type'].should.equal('ORGANIZATIONAL_UNIT')
    response03['Children'][0]['Id'].should.equal(account02_id)
@@ -591,4 +611,3 @@ def test_list_targets_for_policy_exception():
    ex.operation_name.should.equal('ListTargetsForPolicy')
    ex.response['Error']['Code'].should.equal('400')
    ex.response['Error']['Message'].should.contain('InvalidInputException')

@@ -36,6 +36,7 @@ def test_create_cluster_boto3():
    response['Cluster']['NodeType'].should.equal('ds2.xlarge')
    create_time = response['Cluster']['ClusterCreateTime']
    create_time.should.be.lower_than(datetime.datetime.now(create_time.tzinfo))
    create_time.should.be.greater_than(datetime.datetime.now(create_time.tzinfo) - datetime.timedelta(minutes=1))


@mock_redshift
@@ -1,16 +1,12 @@
from __future__ import unicode_literals

import boto
import boto3
from boto.exception import S3CreateError, S3ResponseError
from boto.s3.lifecycle import Lifecycle, Transition, Expiration, Rule

import sure  # noqa
from botocore.exceptions import ClientError
from datetime import datetime
from nose.tools import assert_raises

from moto import mock_s3_deprecated, mock_s3
from moto import mock_s3


@mock_s3
@@ -41,6 +37,18 @@ def test_s3_storage_class_infrequent_access():
    D['Contents'][0]["StorageClass"].should.equal("STANDARD_IA")


@mock_s3
def test_s3_storage_class_intelligent_tiering():
    s3 = boto3.client("s3")

    s3.create_bucket(Bucket="Bucket")
    s3.put_object(Bucket="Bucket", Key="my_key_infrequent", Body="my_value_infrequent", StorageClass="INTELLIGENT_TIERING")

    objects = s3.list_objects(Bucket="Bucket")

    objects['Contents'][0]["StorageClass"].should.equal("INTELLIGENT_TIERING")


@mock_s3
def test_s3_storage_class_copy():
    s3 = boto3.client("s3")
@@ -90,6 +98,7 @@ def test_s3_invalid_storage_class():
    e.response["Error"]["Code"].should.equal("InvalidStorageClass")
    e.response["Error"]["Message"].should.equal("The storage class you specified is not valid")


@mock_s3
def test_s3_default_storage_class():
    s3 = boto3.client("s3")
@@ -103,4 +112,27 @@ def test_s3_default_storage_class():
    list_of_objects["Contents"][0]["StorageClass"].should.equal("STANDARD")


@mock_s3
def test_s3_copy_object_error_for_glacier_storage_class():
    s3 = boto3.client("s3")
    s3.create_bucket(Bucket="Bucket")

    s3.put_object(Bucket="Bucket", Key="First_Object", Body="Body", StorageClass="GLACIER")

    with assert_raises(ClientError) as exc:
        s3.copy_object(CopySource={"Bucket": "Bucket", "Key": "First_Object"}, Bucket="Bucket", Key="Second_Object")

    exc.exception.response["Error"]["Code"].should.equal("ObjectNotInActiveTierError")


@mock_s3
def test_s3_copy_object_error_for_deep_archive_storage_class():
    s3 = boto3.client("s3")
    s3.create_bucket(Bucket="Bucket")

    s3.put_object(Bucket="Bucket", Key="First_Object", Body="Body", StorageClass="DEEP_ARCHIVE")

    with assert_raises(ClientError) as exc:
        s3.copy_object(CopySource={"Bucket": "Bucket", "Key": "First_Object"}, Bucket="Bucket", Key="Second_Object")

    exc.exception.response["Error"]["Code"].should.equal("ObjectNotInActiveTierError")
@@ -8,7 +8,9 @@ from freezegun import freeze_time
from nose.tools import assert_raises
import sure  # noqa

from moto import mock_sts, mock_sts_deprecated
from moto import mock_sts, mock_sts_deprecated, mock_iam, settings
from moto.iam.models import ACCOUNT_ID
from moto.sts.responses import MAX_FEDERATION_TOKEN_POLICY_LENGTH


@@ -29,7 +31,8 @@ def test_get_session_token():
@mock_sts_deprecated
def test_get_federation_token():
    conn = boto.connect_sts()
    token = conn.get_federation_token(duration=123, name="Bob")
    token_name = "Bob"
    token = conn.get_federation_token(duration=123, name=token_name)

    token.credentials.expiration.should.equal('2012-01-01T12:02:03.000Z')
    token.credentials.session_token.should.equal(
@@ -38,15 +41,17 @@ def test_get_federation_token():
    token.credentials.secret_key.should.equal(
        "wJalrXUtnFEMI/K7MDENG/bPxRfiCYzEXAMPLEKEY")
    token.federated_user_arn.should.equal(
        "arn:aws:sts::123456789012:federated-user/Bob")
    token.federated_user_id.should.equal("123456789012:Bob")
        "arn:aws:sts::{account_id}:federated-user/{token_name}".format(account_id=ACCOUNT_ID, token_name=token_name))
    token.federated_user_id.should.equal(str(ACCOUNT_ID) + ":" + token_name)


@freeze_time("2012-01-01 12:00:00")
@mock_sts_deprecated
@mock_sts
def test_assume_role():
    conn = boto.connect_sts()
    client = boto3.client(
        "sts", region_name='us-east-1')

    session_name = "session-name"
    policy = json.dumps({
        "Statement": [
            {
@@ -61,20 +66,25 @@ def test_assume_role():
            },
        ]
    })
    s3_role = "arn:aws:iam::123456789012:role/test-role"
    role = conn.assume_role(s3_role, "session-name",
                            policy, duration_seconds=123)
    role_name = "test-role"
    s3_role = "arn:aws:iam::{account_id}:role/{role_name}".format(account_id=ACCOUNT_ID, role_name=role_name)
    assume_role_response = client.assume_role(RoleArn=s3_role, RoleSessionName=session_name,
                                              Policy=policy, DurationSeconds=900)

    credentials = role.credentials
    credentials.expiration.should.equal('2012-01-01T12:02:03.000Z')
    credentials.session_token.should.have.length_of(356)
    assert credentials.session_token.startswith("FQoGZXIvYXdzE")
    credentials.access_key.should.have.length_of(20)
    assert credentials.access_key.startswith("ASIA")
    credentials.secret_key.should.have.length_of(40)
    credentials = assume_role_response['Credentials']
    if not settings.TEST_SERVER_MODE:
        credentials['Expiration'].isoformat().should.equal('2012-01-01T12:15:00+00:00')
    credentials['SessionToken'].should.have.length_of(356)
    assert credentials['SessionToken'].startswith("FQoGZXIvYXdzE")
    credentials['AccessKeyId'].should.have.length_of(20)
    assert credentials['AccessKeyId'].startswith("ASIA")
    credentials['SecretAccessKey'].should.have.length_of(40)

    role.user.arn.should.equal("arn:aws:iam::123456789012:role/test-role")
    role.user.assume_role_id.should.contain("session-name")
    assume_role_response['AssumedRoleUser']['Arn'].should.equal("arn:aws:sts::{account_id}:assumed-role/{role_name}/{session_name}".format(
        account_id=ACCOUNT_ID, role_name=role_name, session_name=session_name))
    assert assume_role_response['AssumedRoleUser']['AssumedRoleId'].startswith("AROA")
    assert assume_role_response['AssumedRoleUser']['AssumedRoleId'].endswith(":" + session_name)
    assume_role_response['AssumedRoleUser']['AssumedRoleId'].should.have.length_of(21 + 1 + len(session_name))


@freeze_time("2012-01-01 12:00:00")
@@ -96,9 +106,11 @@ def test_assume_role_with_web_identity():
            },
        ]
    })
    s3_role = "arn:aws:iam::123456789012:role/test-role"
    role_name = "test-role"
    s3_role = "arn:aws:iam::{account_id}:role/{role_name}".format(account_id=ACCOUNT_ID, role_name=role_name)
    session_name = "session-name"
    role = conn.assume_role_with_web_identity(
        s3_role, "session-name", policy, duration_seconds=123)
        s3_role, session_name, policy, duration_seconds=123)

    credentials = role.credentials
    credentials.expiration.should.equal('2012-01-01T12:02:03.000Z')
@@ -108,18 +120,68 @@ def test_assume_role_with_web_identity():
    assert credentials.access_key.startswith("ASIA")
    credentials.secret_key.should.have.length_of(40)

    role.user.arn.should.equal("arn:aws:iam::123456789012:role/test-role")
    role.user.arn.should.equal("arn:aws:sts::{account_id}:assumed-role/{role_name}/{session_name}".format(
        account_id=ACCOUNT_ID, role_name=role_name, session_name=session_name))
    role.user.assume_role_id.should.contain("session-name")


@mock_sts
def test_get_caller_identity():
def test_get_caller_identity_with_default_credentials():
    identity = boto3.client(
        "sts", region_name='us-east-1').get_caller_identity()

    identity['Arn'].should.equal('arn:aws:sts::123456789012:user/moto')
    identity['Arn'].should.equal('arn:aws:sts::{account_id}:user/moto'.format(account_id=ACCOUNT_ID))
    identity['UserId'].should.equal('AKIAIOSFODNN7EXAMPLE')
    identity['Account'].should.equal('123456789012')
|
||||
identity['Account'].should.equal(str(ACCOUNT_ID))
|
||||
|
||||
|
||||
@mock_sts
|
||||
@mock_iam
|
||||
def test_get_caller_identity_with_iam_user_credentials():
|
||||
iam_client = boto3.client("iam", region_name='us-east-1')
|
||||
iam_user_name = "new-user"
|
||||
iam_user = iam_client.create_user(UserName=iam_user_name)['User']
|
||||
access_key = iam_client.create_access_key(UserName=iam_user_name)['AccessKey']
|
||||
|
||||
identity = boto3.client(
|
||||
"sts", region_name='us-east-1', aws_access_key_id=access_key['AccessKeyId'],
|
||||
aws_secret_access_key=access_key['SecretAccessKey']).get_caller_identity()
|
||||
|
||||
identity['Arn'].should.equal(iam_user['Arn'])
|
||||
identity['UserId'].should.equal(iam_user['UserId'])
|
||||
identity['Account'].should.equal(str(ACCOUNT_ID))
|
||||
|
||||
|
||||
@mock_sts
|
||||
@mock_iam
|
||||
def test_get_caller_identity_with_assumed_role_credentials():
|
||||
iam_client = boto3.client("iam", region_name='us-east-1')
|
||||
sts_client = boto3.client("sts", region_name='us-east-1')
|
||||
iam_role_name = "new-user"
|
||||
trust_policy_document = {
|
||||
"Version": "2012-10-17",
|
||||
"Statement": {
|
||||
"Effect": "Allow",
|
||||
"Principal": {"AWS": "arn:aws:iam::{account_id}:root".format(account_id=ACCOUNT_ID)},
|
||||
"Action": "sts:AssumeRole"
|
||||
}
|
||||
}
|
||||
iam_role_arn = iam_client.role_arn = iam_client.create_role(
|
||||
RoleName=iam_role_name,
|
||||
AssumeRolePolicyDocument=json.dumps(trust_policy_document)
|
||||
)['Role']['Arn']
|
||||
session_name = "new-session"
|
||||
assumed_role = sts_client.assume_role(RoleArn=iam_role_arn,
|
||||
RoleSessionName=session_name)
|
||||
access_key = assumed_role['Credentials']
|
||||
|
||||
identity = boto3.client(
|
||||
"sts", region_name='us-east-1', aws_access_key_id=access_key['AccessKeyId'],
|
||||
aws_secret_access_key=access_key['SecretAccessKey']).get_caller_identity()
|
||||
|
||||
identity['Arn'].should.equal(assumed_role['AssumedRoleUser']['Arn'])
|
||||
identity['UserId'].should.equal(assumed_role['AssumedRoleUser']['AssumedRoleId'])
|
||||
identity['Account'].should.equal(str(ACCOUNT_ID))
|
||||
|
||||
|
||||
@mock_sts
|
||||
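The updated assertions check the shape of the identifiers STS returns for an assumed role: the ARN follows `arn:aws:sts::{account_id}:assumed-role/{role_name}/{session_name}`, and the `AssumedRoleId` is a 21-character role id starting with `AROA` joined to the session name with a colon (hence the `21 + 1 + len(session_name)` length check). A minimal sketch of those formats, using illustrative helper names that are not part of moto's API:

```python
# Illustrative helpers mirroring the identifier formats asserted above;
# build_assumed_role_arn and build_assumed_role_id are hypothetical names.
ACCOUNT_ID = "123456789012"


def build_assumed_role_arn(account_id, role_name, session_name):
    # STS assumed-role ARN, as checked in the test_assume_role assertions.
    return "arn:aws:sts::{account_id}:assumed-role/{role_name}/{session_name}".format(
        account_id=account_id, role_name=role_name, session_name=session_name)


def build_assumed_role_id(role_id, session_name):
    # AssumedRoleId is the role id ("AROA...", 21 chars) plus ":" and the session name.
    return role_id + ":" + session_name


arn = build_assumed_role_arn(ACCOUNT_ID, "test-role", "session-name")
assert arn == "arn:aws:sts::123456789012:assumed-role/test-role/session-name"

role_id = "AROA" + "X" * 17  # hypothetical 21-character role id
assumed_role_id = build_assumed_role_id(role_id, "session-name")
assert assumed_role_id.startswith("AROA")
assert assumed_role_id.endswith(":session-name")
assert len(assumed_role_id) == 21 + 1 + len("session-name")
```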