Merge branch 'master' into feature/2546

Bert Blommers 2019-11-19 08:00:59 +00:00
commit 0ea98f22ee
44 changed files with 2383 additions and 218 deletions


@ -35,9 +35,10 @@ install:
if [ "$TEST_SERVER_MODE" = "true" ]; then if [ "$TEST_SERVER_MODE" = "true" ]; then
python wait_for.py python wait_for.py
fi fi
before_script:
- if [[ $TRAVIS_PYTHON_VERSION == "3.7" ]]; then make lint; fi
script: script:
- make test-only - make test-only
- if [[ $TRAVIS_PYTHON_VERSION == "3.7" ]]; then make lint; fi
after_success: after_success:
- coveralls - coveralls
before_deploy: before_deploy:


@ -1,6 +1,189 @@
Moto Changelog Moto Changelog
=================== ===================
1.3.14
-----
General Changes:
* Support for Python 3.8
* Linting: Black is now enforced.
New Services:
* Athena
* Config
* DataSync
* Step Functions
New methods:
* Athena:
* create_work_group()
* list_work_groups()
* API Gateway:
* delete_stage()
* update_api_key()
* CloudWatch Logs
* list_tags_log_group()
* tag_log_group()
* untag_log_group()
* Config
* batch_get_resource_config()
* delete_aggregation_authorization()
* delete_configuration_aggregator()
* describe_aggregation_authorizations()
* describe_configuration_aggregators()
* get_resource_config_history()
* list_aggregate_discovered_resources() (For S3)
* list_discovered_resources() (For S3)
* put_aggregation_authorization()
* put_configuration_aggregator()
* Cognito
* assume_role_with_web_identity()
* describe_identity_pool()
* get_open_id_token()
* update_user_pool_domain()
* DataSync:
* cancel_task_execution()
* create_location()
* create_task()
* start_task_execution()
* EC2:
* create_launch_template()
* create_launch_template_version()
* describe_launch_template_versions()
* describe_launch_templates()
* ECS
* decrypt()
* encrypt()
* generate_data_key_without_plaintext()
* generate_random()
* re_encrypt()
* Glue
* batch_get_partition()
* IAM
* create_open_id_connect_provider()
* create_virtual_mfa_device()
* delete_account_password_policy()
* delete_open_id_connect_provider()
* delete_policy()
* delete_virtual_mfa_device()
* get_account_password_policy()
* get_open_id_connect_provider()
* list_open_id_connect_providers()
* list_virtual_mfa_devices()
* update_account_password_policy()
* Lambda
* create_event_source_mapping()
* delete_event_source_mapping()
* get_event_source_mapping()
* list_event_source_mappings()
* update_configuration()
* update_event_source_mapping()
* update_function_code()
* KMS
* decrypt()
* encrypt()
* generate_data_key_without_plaintext()
* generate_random()
* re_encrypt()
* SES
* send_templated_email()
* SNS
* add_permission()
* list_tags_for_resource()
* remove_permission()
* tag_resource()
* untag_resource()
* SSM
* describe_parameters()
* get_parameter_history()
* Step Functions
* create_state_machine()
* delete_state_machine()
* describe_execution()
* describe_state_machine()
* describe_state_machine_for_execution()
* list_executions()
* list_state_machines()
* list_tags_for_resource()
* start_execution()
* stop_execution()
* SQS
* list_queue_tags()
* send_message_batch()
General updates:
* API Gateway:
* Now generates valid IDs
* API Keys, Usage Plans now support tags
* ACM:
* list_certificates() accepts the status parameter
* Batch:
* submit_job() can now be called with job name
* CloudWatch Events
* Multi-region support
* CloudWatch Logs
* get_log_events() now supports pagination
* Cognito:
* Now throws UsernameExistsException for known users
* DynamoDB
* update_item() now supports lists, the list_append operator, and removing nested items
* delete_item() now supports condition expressions
* get_item() now supports projection expression
* Enforces 400KB item size
* Validation on duplicate keys in batch_get_item()
* Validation on AttributeDefinitions on create_table()
* Validation on Query Key Expression
* Projection Expressions now support nested attributes
* EC2:
* Change DesiredCapacity behaviour for AutoScaling groups
* Extend list of supported EC2 ENI properties
* Create ASG from Instance now supported
* ASGs attached to a terminated instance now recreate the instance if required
* Unify OwnerIDs
* ECS
* Task definition revision deregistration: remaining revisions now remain unchanged
* Fix created_at/updated_at format for deployments
* Support multiple regions
* ELB
* Return correct response when describing target health of stopped instances
* Target groups no longer show terminated instances
* 'fixed-response' now a supported action-type
* Now supports redirect: authenticate-cognito
* Kinesis Firehose
* Now supports ExtendedS3DestinationConfiguration
* KMS
* Now supports tags
* Organizations
* create_organization() now creates the master account
* Redshift
* Fix timezone problems when creating a cluster
* Support for enhanced_vpc_routing-parameter
* Route53
* Implemented UPSERT for change_resource_records
* S3:
* Support partNumber for head_object
* Support for INTELLIGENT_TIERING, GLACIER and DEEP_ARCHIVE
* Fix KeyCount attribute
* list_objects now supports pagination (next_marker)
* Support tagging for versioned objects
* STS
* Implement validation on policy length
* Lambda
* Support EventSourceMappings for SQS, DynamoDB
* get_function(), delete_function() now both support ARNs as parameters
* IAM
* Roles now support tags
* Policy Validation: SID can be empty
* Validate roles have no attachments when deleting
* SecretsManager
* Now supports binary secrets
* IoT
* update_thing_shadow validation
* delete_thing now also removes principals
* SQS
* Tags supported for create_queue()
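One of the DynamoDB changes above, nested-attribute support in Projection Expressions, can be sketched in plain Python. This is an illustrative model of the behaviour using plain dicts, not moto's actual implementation: a path such as "Address.City" selects only that nested value from the stored item.

```python
def project(item, expressions):
    """Return a copy of `item` containing only the attribute paths given.

    Each expression is a dot-separated document path, e.g. "Address.City".
    Paths that do not exist in the item are silently skipped.
    """
    result = {}
    for expression in expressions:
        source, target = item, result
        parts = expression.split(".")
        for part in parts[:-1]:
            # Walk down the nested dicts; bail out if the path is missing.
            if not isinstance(source, dict) or part not in source:
                break
            source = source[part]
            target = target.setdefault(part, {})
        else:
            if isinstance(source, dict) and parts[-1] in source:
                target[parts[-1]] = source[parts[-1]]
    return result


item = {"Id": "42", "Address": {"City": "Utrecht", "Street": "Main"}}
print(project(item, ["Id", "Address.City"]))
# {'Id': '42', 'Address': {'City': 'Utrecht'}}
```

The item and attribute names here are made up for illustration; the point is that a top-level path ("Id") and a nested path ("Address.City") can be projected in the same expression.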
1.3.7 1.3.7
----- -----


@ -171,7 +171,7 @@
- [ ] update_webhook - [ ] update_webhook
## apigateway ## apigateway
24% implemented 25% implemented
- [ ] create_api_key - [ ] create_api_key
- [ ] create_authorizer - [ ] create_authorizer
- [ ] create_base_path_mapping - [ ] create_base_path_mapping
@ -204,7 +204,7 @@
- [ ] delete_request_validator - [ ] delete_request_validator
- [X] delete_resource - [X] delete_resource
- [X] delete_rest_api - [X] delete_rest_api
- [ ] delete_stage - [X] delete_stage
- [X] delete_usage_plan - [X] delete_usage_plan
- [X] delete_usage_plan_key - [X] delete_usage_plan_key
- [ ] delete_vpc_link - [ ] delete_vpc_link
@ -687,12 +687,17 @@
## ce ## ce
0% implemented 0% implemented
- [ ] get_cost_and_usage - [ ] get_cost_and_usage
- [ ] get_cost_and_usage_with_resources
- [ ] get_cost_forecast - [ ] get_cost_forecast
- [ ] get_dimension_values - [ ] get_dimension_values
- [ ] get_reservation_coverage - [ ] get_reservation_coverage
- [ ] get_reservation_purchase_recommendation - [ ] get_reservation_purchase_recommendation
- [ ] get_reservation_utilization - [ ] get_reservation_utilization
- [ ] get_rightsizing_recommendation - [ ] get_rightsizing_recommendation
- [ ] get_savings_plans_coverage
- [ ] get_savings_plans_purchase_recommendation
- [ ] get_savings_plans_utilization
- [ ] get_savings_plans_utilization_details
- [ ] get_tags - [ ] get_tags
- [ ] get_usage_forecast - [ ] get_usage_forecast
@ -701,6 +706,7 @@
- [ ] associate_phone_number_with_user - [ ] associate_phone_number_with_user
- [ ] associate_phone_numbers_with_voice_connector - [ ] associate_phone_numbers_with_voice_connector
- [ ] associate_phone_numbers_with_voice_connector_group - [ ] associate_phone_numbers_with_voice_connector_group
- [ ] batch_create_room_membership
- [ ] batch_delete_phone_number - [ ] batch_delete_phone_number
- [ ] batch_suspend_user - [ ] batch_suspend_user
- [ ] batch_unsuspend_user - [ ] batch_unsuspend_user
@ -709,11 +715,15 @@
- [ ] create_account - [ ] create_account
- [ ] create_bot - [ ] create_bot
- [ ] create_phone_number_order - [ ] create_phone_number_order
- [ ] create_room
- [ ] create_room_membership
- [ ] create_voice_connector - [ ] create_voice_connector
- [ ] create_voice_connector_group - [ ] create_voice_connector_group
- [ ] delete_account - [ ] delete_account
- [ ] delete_events_configuration - [ ] delete_events_configuration
- [ ] delete_phone_number - [ ] delete_phone_number
- [ ] delete_room
- [ ] delete_room_membership
- [ ] delete_voice_connector - [ ] delete_voice_connector
- [ ] delete_voice_connector_group - [ ] delete_voice_connector_group
- [ ] delete_voice_connector_origination - [ ] delete_voice_connector_origination
@ -731,6 +741,7 @@
- [ ] get_phone_number - [ ] get_phone_number
- [ ] get_phone_number_order - [ ] get_phone_number_order
- [ ] get_phone_number_settings - [ ] get_phone_number_settings
- [ ] get_room
- [ ] get_user - [ ] get_user
- [ ] get_user_settings - [ ] get_user_settings
- [ ] get_voice_connector - [ ] get_voice_connector
@ -745,6 +756,8 @@
- [ ] list_bots - [ ] list_bots
- [ ] list_phone_number_orders - [ ] list_phone_number_orders
- [ ] list_phone_numbers - [ ] list_phone_numbers
- [ ] list_room_memberships
- [ ] list_rooms
- [ ] list_users - [ ] list_users
- [ ] list_voice_connector_groups - [ ] list_voice_connector_groups
- [ ] list_voice_connector_termination_credentials - [ ] list_voice_connector_termination_credentials
@ -766,6 +779,8 @@
- [ ] update_global_settings - [ ] update_global_settings
- [ ] update_phone_number - [ ] update_phone_number
- [ ] update_phone_number_settings - [ ] update_phone_number_settings
- [ ] update_room
- [ ] update_room_membership
- [ ] update_user - [ ] update_user
- [ ] update_user_settings - [ ] update_user_settings
- [ ] update_voice_connector - [ ] update_voice_connector
@ -1003,6 +1018,7 @@
- [ ] delete_suggester - [ ] delete_suggester
- [ ] describe_analysis_schemes - [ ] describe_analysis_schemes
- [ ] describe_availability_options - [ ] describe_availability_options
- [ ] describe_domain_endpoint_options
- [ ] describe_domains - [ ] describe_domains
- [ ] describe_expressions - [ ] describe_expressions
- [ ] describe_index_fields - [ ] describe_index_fields
@ -1012,6 +1028,7 @@
- [ ] index_documents - [ ] index_documents
- [ ] list_domain_names - [ ] list_domain_names
- [ ] update_availability_options - [ ] update_availability_options
- [ ] update_domain_endpoint_options
- [ ] update_scaling_parameters - [ ] update_scaling_parameters
- [ ] update_service_access_policies - [ ] update_service_access_policies
@ -1028,9 +1045,11 @@
- [ ] delete_trail - [ ] delete_trail
- [ ] describe_trails - [ ] describe_trails
- [ ] get_event_selectors - [ ] get_event_selectors
- [ ] get_trail
- [ ] get_trail_status - [ ] get_trail_status
- [ ] list_public_keys - [ ] list_public_keys
- [ ] list_tags - [ ] list_tags
- [ ] list_trails
- [ ] lookup_events - [ ] lookup_events
- [ ] put_event_selectors - [ ] put_event_selectors
- [ ] remove_tags - [ ] remove_tags
@ -1252,6 +1271,22 @@
- [ ] update_team_member - [ ] update_team_member
- [ ] update_user_profile - [ ] update_user_profile
## codestar-notifications
0% implemented
- [ ] create_notification_rule
- [ ] delete_notification_rule
- [ ] delete_target
- [ ] describe_notification_rule
- [ ] list_event_types
- [ ] list_notification_rules
- [ ] list_tags_for_resource
- [ ] list_targets
- [ ] subscribe
- [ ] tag_resource
- [ ] unsubscribe
- [ ] untag_resource
- [ ] update_notification_rule
## cognito-identity ## cognito-identity
28% implemented 28% implemented
- [X] create_identity_pool - [X] create_identity_pool
@ -1545,10 +1580,13 @@
- [ ] list_queues - [ ] list_queues
- [ ] list_routing_profiles - [ ] list_routing_profiles
- [ ] list_security_profiles - [ ] list_security_profiles
- [ ] list_tags_for_resource
- [ ] list_user_hierarchy_groups - [ ] list_user_hierarchy_groups
- [ ] list_users - [ ] list_users
- [ ] start_outbound_voice_contact - [ ] start_outbound_voice_contact
- [ ] stop_contact - [ ] stop_contact
- [ ] tag_resource
- [ ] untag_resource
- [ ] update_contact_attributes - [ ] update_contact_attributes
- [ ] update_user_hierarchy - [ ] update_user_hierarchy
- [ ] update_user_identity_info - [ ] update_user_identity_info
@ -1563,6 +1601,31 @@
- [ ] modify_report_definition - [ ] modify_report_definition
- [ ] put_report_definition - [ ] put_report_definition
## dataexchange
0% implemented
- [ ] cancel_job
- [ ] create_data_set
- [ ] create_job
- [ ] create_revision
- [ ] delete_asset
- [ ] delete_data_set
- [ ] delete_revision
- [ ] get_asset
- [ ] get_data_set
- [ ] get_job
- [ ] get_revision
- [ ] list_data_set_revisions
- [ ] list_data_sets
- [ ] list_jobs
- [ ] list_revision_assets
- [ ] list_tags_for_resource
- [ ] start_job
- [ ] tag_resource
- [ ] untag_resource
- [ ] update_asset
- [ ] update_data_set
- [ ] update_revision
## datapipeline ## datapipeline
42% implemented 42% implemented
- [X] activate_pipeline - [X] activate_pipeline
@ -1586,17 +1649,17 @@
- [ ] validate_pipeline_definition - [ ] validate_pipeline_definition
## datasync ## datasync
0% implemented 22% implemented
- [ ] cancel_task_execution - [X] cancel_task_execution
- [ ] create_agent - [ ] create_agent
- [ ] create_location_efs - [ ] create_location_efs
- [ ] create_location_nfs - [ ] create_location_nfs
- [ ] create_location_s3 - [ ] create_location_s3
- [ ] create_location_smb - [ ] create_location_smb
- [ ] create_task - [X] create_task
- [ ] delete_agent - [ ] delete_agent
- [ ] delete_location - [X] delete_location
- [ ] delete_task - [X] delete_task
- [ ] describe_agent - [ ] describe_agent
- [ ] describe_location_efs - [ ] describe_location_efs
- [ ] describe_location_nfs - [ ] describe_location_nfs
@ -1609,11 +1672,11 @@
- [ ] list_tags_for_resource - [ ] list_tags_for_resource
- [ ] list_task_executions - [ ] list_task_executions
- [ ] list_tasks - [ ] list_tasks
- [ ] start_task_execution - [X] start_task_execution
- [ ] tag_resource - [ ] tag_resource
- [ ] untag_resource - [ ] untag_resource
- [ ] update_agent - [ ] update_agent
- [ ] update_task - [X] update_task
## dax ## dax
0% implemented 0% implemented
@ -1799,6 +1862,9 @@
- [ ] delete_lifecycle_policy - [ ] delete_lifecycle_policy
- [ ] get_lifecycle_policies - [ ] get_lifecycle_policies
- [ ] get_lifecycle_policy - [ ] get_lifecycle_policy
- [ ] list_tags_for_resource
- [ ] tag_resource
- [ ] untag_resource
- [ ] update_lifecycle_policy - [ ] update_lifecycle_policy
## dms ## dms
@ -2217,8 +2283,8 @@
- [X] describe_volumes - [X] describe_volumes
- [ ] describe_volumes_modifications - [ ] describe_volumes_modifications
- [X] describe_vpc_attribute - [X] describe_vpc_attribute
- [ ] describe_vpc_classic_link - [X] describe_vpc_classic_link
- [ ] describe_vpc_classic_link_dns_support - [X] describe_vpc_classic_link_dns_support
- [ ] describe_vpc_endpoint_connection_notifications - [ ] describe_vpc_endpoint_connection_notifications
- [ ] describe_vpc_endpoint_connections - [ ] describe_vpc_endpoint_connections
- [ ] describe_vpc_endpoint_service_configurations - [ ] describe_vpc_endpoint_service_configurations
@ -2237,8 +2303,8 @@
- [ ] disable_ebs_encryption_by_default - [ ] disable_ebs_encryption_by_default
- [ ] disable_transit_gateway_route_table_propagation - [ ] disable_transit_gateway_route_table_propagation
- [ ] disable_vgw_route_propagation - [ ] disable_vgw_route_propagation
- [ ] disable_vpc_classic_link - [X] disable_vpc_classic_link
- [ ] disable_vpc_classic_link_dns_support - [X] disable_vpc_classic_link_dns_support
- [X] disassociate_address - [X] disassociate_address
- [ ] disassociate_client_vpn_target_network - [ ] disassociate_client_vpn_target_network
- [ ] disassociate_iam_instance_profile - [ ] disassociate_iam_instance_profile
@ -2250,8 +2316,8 @@
- [ ] enable_transit_gateway_route_table_propagation - [ ] enable_transit_gateway_route_table_propagation
- [ ] enable_vgw_route_propagation - [ ] enable_vgw_route_propagation
- [ ] enable_volume_io - [ ] enable_volume_io
- [ ] enable_vpc_classic_link - [X] enable_vpc_classic_link
- [ ] enable_vpc_classic_link_dns_support - [X] enable_vpc_classic_link_dns_support
- [ ] export_client_vpn_client_certificate_revocation_list - [ ] export_client_vpn_client_certificate_revocation_list
- [ ] export_client_vpn_client_configuration - [ ] export_client_vpn_client_configuration
- [ ] export_image - [ ] export_image
@ -2461,16 +2527,22 @@
## eks ## eks
0% implemented 0% implemented
- [ ] create_cluster - [ ] create_cluster
- [ ] create_nodegroup
- [ ] delete_cluster - [ ] delete_cluster
- [ ] delete_nodegroup
- [ ] describe_cluster - [ ] describe_cluster
- [ ] describe_nodegroup
- [ ] describe_update - [ ] describe_update
- [ ] list_clusters - [ ] list_clusters
- [ ] list_nodegroups
- [ ] list_tags_for_resource - [ ] list_tags_for_resource
- [ ] list_updates - [ ] list_updates
- [ ] tag_resource - [ ] tag_resource
- [ ] untag_resource - [ ] untag_resource
- [ ] update_cluster_config - [ ] update_cluster_config
- [ ] update_cluster_version - [ ] update_cluster_version
- [ ] update_nodegroup_config
- [ ] update_nodegroup_version
## elasticache ## elasticache
0% implemented 0% implemented
@ -2718,12 +2790,12 @@
- [ ] upgrade_elasticsearch_domain - [ ] upgrade_elasticsearch_domain
## events ## events
48% implemented 58% implemented
- [ ] activate_event_source - [ ] activate_event_source
- [ ] create_event_bus - [X] create_event_bus
- [ ] create_partner_event_source - [ ] create_partner_event_source
- [ ] deactivate_event_source - [ ] deactivate_event_source
- [ ] delete_event_bus - [X] delete_event_bus
- [ ] delete_partner_event_source - [ ] delete_partner_event_source
- [X] delete_rule - [X] delete_rule
- [X] describe_event_bus - [X] describe_event_bus
@ -2732,7 +2804,7 @@
- [X] describe_rule - [X] describe_rule
- [X] disable_rule - [X] disable_rule
- [X] enable_rule - [X] enable_rule
- [ ] list_event_buses - [X] list_event_buses
- [ ] list_event_sources - [ ] list_event_sources
- [ ] list_partner_event_source_accounts - [ ] list_partner_event_source_accounts
- [ ] list_partner_event_sources - [ ] list_partner_event_sources
@ -3217,6 +3289,7 @@
- [ ] create_filter - [ ] create_filter
- [ ] create_ip_set - [ ] create_ip_set
- [ ] create_members - [ ] create_members
- [ ] create_publishing_destination
- [ ] create_sample_findings - [ ] create_sample_findings
- [ ] create_threat_intel_set - [ ] create_threat_intel_set
- [ ] decline_invitations - [ ] decline_invitations
@ -3225,7 +3298,9 @@
- [ ] delete_invitations - [ ] delete_invitations
- [ ] delete_ip_set - [ ] delete_ip_set
- [ ] delete_members - [ ] delete_members
- [ ] delete_publishing_destination
- [ ] delete_threat_intel_set - [ ] delete_threat_intel_set
- [ ] describe_publishing_destination
- [ ] disassociate_from_master_account - [ ] disassociate_from_master_account
- [ ] disassociate_members - [ ] disassociate_members
- [ ] get_detector - [ ] get_detector
@ -3244,6 +3319,7 @@
- [ ] list_invitations - [ ] list_invitations
- [ ] list_ip_sets - [ ] list_ip_sets
- [ ] list_members - [ ] list_members
- [ ] list_publishing_destinations
- [ ] list_tags_for_resource - [ ] list_tags_for_resource
- [ ] list_threat_intel_sets - [ ] list_threat_intel_sets
- [ ] start_monitoring_members - [ ] start_monitoring_members
@ -3255,6 +3331,7 @@
- [ ] update_filter - [ ] update_filter
- [ ] update_findings_feedback - [ ] update_findings_feedback
- [ ] update_ip_set - [ ] update_ip_set
- [ ] update_publishing_destination
- [ ] update_threat_intel_set - [ ] update_threat_intel_set
## health ## health
@ -3267,7 +3344,7 @@
- [ ] describe_events - [ ] describe_events
## iam ## iam
62% implemented 65% implemented
- [ ] add_client_id_to_open_id_connect_provider - [ ] add_client_id_to_open_id_connect_provider
- [X] add_role_to_instance_profile - [X] add_role_to_instance_profile
- [X] add_user_to_group - [X] add_user_to_group
@ -3293,7 +3370,7 @@
- [X] delete_access_key - [X] delete_access_key
- [X] delete_account_alias - [X] delete_account_alias
- [X] delete_account_password_policy - [X] delete_account_password_policy
- [ ] delete_group - [X] delete_group
- [ ] delete_group_policy - [ ] delete_group_policy
- [ ] delete_instance_profile - [ ] delete_instance_profile
- [X] delete_login_profile - [X] delete_login_profile
@ -3323,7 +3400,7 @@
- [X] get_access_key_last_used - [X] get_access_key_last_used
- [X] get_account_authorization_details - [X] get_account_authorization_details
- [X] get_account_password_policy - [X] get_account_password_policy
- [ ] get_account_summary - [X] get_account_summary
- [ ] get_context_keys_for_custom_policy - [ ] get_context_keys_for_custom_policy
- [ ] get_context_keys_for_principal_policy - [ ] get_context_keys_for_principal_policy
- [X] get_credential_report - [X] get_credential_report
@ -3405,7 +3482,7 @@
- [X] update_signing_certificate - [X] update_signing_certificate
- [ ] update_ssh_public_key - [ ] update_ssh_public_key
- [X] update_user - [X] update_user
- [ ] upload_server_certificate - [X] upload_server_certificate
- [X] upload_signing_certificate - [X] upload_signing_certificate
- [ ] upload_ssh_public_key - [ ] upload_ssh_public_key
@ -3459,7 +3536,7 @@
- [ ] update_assessment_target - [ ] update_assessment_target
## iot ## iot
23% implemented 22% implemented
- [ ] accept_certificate_transfer - [ ] accept_certificate_transfer
- [ ] add_thing_to_billing_group - [ ] add_thing_to_billing_group
- [X] add_thing_to_thing_group - [X] add_thing_to_thing_group
@ -3544,11 +3621,13 @@
- [X] detach_thing_principal - [X] detach_thing_principal
- [ ] disable_topic_rule - [ ] disable_topic_rule
- [ ] enable_topic_rule - [ ] enable_topic_rule
- [ ] get_cardinality
- [ ] get_effective_policies - [ ] get_effective_policies
- [ ] get_indexing_configuration - [ ] get_indexing_configuration
- [ ] get_job_document - [ ] get_job_document
- [ ] get_logging_options - [ ] get_logging_options
- [ ] get_ota_update - [ ] get_ota_update
- [ ] get_percentiles
- [X] get_policy - [X] get_policy
- [ ] get_policy_version - [ ] get_policy_version
- [ ] get_registration_code - [ ] get_registration_code
@ -3977,46 +4056,46 @@
- [ ] update_resource - [ ] update_resource
## lambda ## lambda
0% implemented 41% implemented
- [ ] add_layer_version_permission - [ ] add_layer_version_permission
- [ ] add_permission - [ ] add_permission
- [ ] create_alias - [ ] create_alias
- [ ] create_event_source_mapping - [X] create_event_source_mapping
- [ ] create_function - [X] create_function
- [ ] delete_alias - [ ] delete_alias
- [ ] delete_event_source_mapping - [X] delete_event_source_mapping
- [ ] delete_function - [X] delete_function
- [ ] delete_function_concurrency - [ ] delete_function_concurrency
- [ ] delete_layer_version - [ ] delete_layer_version
- [ ] get_account_settings - [ ] get_account_settings
- [ ] get_alias - [ ] get_alias
- [ ] get_event_source_mapping - [X] get_event_source_mapping
- [ ] get_function - [X] get_function
- [ ] get_function_configuration - [ ] get_function_configuration
- [ ] get_layer_version - [ ] get_layer_version
- [ ] get_layer_version_by_arn - [ ] get_layer_version_by_arn
- [ ] get_layer_version_policy - [ ] get_layer_version_policy
- [ ] get_policy - [ ] get_policy
- [ ] invoke - [X] invoke
- [ ] invoke_async - [ ] invoke_async
- [ ] list_aliases - [ ] list_aliases
- [ ] list_event_source_mappings - [X] list_event_source_mappings
- [ ] list_functions - [X] list_functions
- [ ] list_layer_versions - [ ] list_layer_versions
- [ ] list_layers - [ ] list_layers
- [ ] list_tags - [X] list_tags
- [ ] list_versions_by_function - [X] list_versions_by_function
- [ ] publish_layer_version - [ ] publish_layer_version
- [ ] publish_version - [ ] publish_version
- [ ] put_function_concurrency - [ ] put_function_concurrency
- [ ] remove_layer_version_permission - [ ] remove_layer_version_permission
- [ ] remove_permission - [ ] remove_permission
- [ ] tag_resource - [X] tag_resource
- [ ] untag_resource - [X] untag_resource
- [ ] update_alias - [ ] update_alias
- [ ] update_event_source_mapping - [X] update_event_source_mapping
- [ ] update_function_code - [X] update_function_code
- [ ] update_function_configuration - [X] update_function_configuration
## lex-models ## lex-models
0% implemented 0% implemented
@ -4295,6 +4374,15 @@
- [ ] reject_invitation - [ ] reject_invitation
- [ ] vote_on_proposal - [ ] vote_on_proposal
## marketplace-catalog
0% implemented
- [ ] cancel_change_set
- [ ] describe_change_set
- [ ] describe_entity
- [ ] list_change_sets
- [ ] list_entities
- [ ] start_change_set
## marketplace-entitlement ## marketplace-entitlement
0% implemented 0% implemented
- [ ] get_entitlements - [ ] get_entitlements
@ -4723,7 +4811,7 @@
- [ ] update_server_engine_attributes - [ ] update_server_engine_attributes
## organizations ## organizations
41% implemented 43% implemented
- [ ] accept_handshake - [ ] accept_handshake
- [X] attach_policy - [X] attach_policy
- [ ] cancel_handshake - [ ] cancel_handshake
@ -4737,7 +4825,7 @@
- [ ] delete_organizational_unit - [ ] delete_organizational_unit
- [ ] delete_policy - [ ] delete_policy
- [X] describe_account - [X] describe_account
- [ ] describe_create_account_status - [X] describe_create_account_status
- [ ] describe_handshake - [ ] describe_handshake
- [X] describe_organization - [X] describe_organization
- [X] describe_organizational_unit - [X] describe_organizational_unit
@ -4773,6 +4861,7 @@
## personalize ## personalize
0% implemented 0% implemented
- [ ] create_batch_inference_job
- [ ] create_campaign - [ ] create_campaign
- [ ] create_dataset - [ ] create_dataset
- [ ] create_dataset_group - [ ] create_dataset_group
@ -4788,6 +4877,7 @@
- [ ] delete_schema - [ ] delete_schema
- [ ] delete_solution - [ ] delete_solution
- [ ] describe_algorithm - [ ] describe_algorithm
- [ ] describe_batch_inference_job
- [ ] describe_campaign - [ ] describe_campaign
- [ ] describe_dataset - [ ] describe_dataset
- [ ] describe_dataset_group - [ ] describe_dataset_group
@ -4799,6 +4889,7 @@
- [ ] describe_solution - [ ] describe_solution
- [ ] describe_solution_version - [ ] describe_solution_version
- [ ] get_solution_metrics - [ ] get_solution_metrics
- [ ] list_batch_inference_jobs
- [ ] list_campaigns - [ ] list_campaigns
- [ ] list_dataset_groups - [ ] list_dataset_groups
- [ ] list_dataset_import_jobs - [ ] list_dataset_import_jobs
@ -4831,6 +4922,7 @@
- [ ] create_email_template - [ ] create_email_template
- [ ] create_export_job - [ ] create_export_job
- [ ] create_import_job - [ ] create_import_job
- [ ] create_journey
- [ ] create_push_template - [ ] create_push_template
- [ ] create_segment - [ ] create_segment
- [ ] create_sms_template - [ ] create_sms_template
@ -4847,6 +4939,7 @@
- [ ] delete_endpoint - [ ] delete_endpoint
- [ ] delete_event_stream - [ ] delete_event_stream
- [ ] delete_gcm_channel - [ ] delete_gcm_channel
- [ ] delete_journey
- [ ] delete_push_template - [ ] delete_push_template
- [ ] delete_segment - [ ] delete_segment
- [ ] delete_sms_channel - [ ] delete_sms_channel
@ -4879,6 +4972,10 @@
- [ ] get_gcm_channel - [ ] get_gcm_channel
- [ ] get_import_job - [ ] get_import_job
- [ ] get_import_jobs - [ ] get_import_jobs
- [ ] get_journey
- [ ] get_journey_date_range_kpi
- [ ] get_journey_execution_activity_metrics
- [ ] get_journey_execution_metrics
- [ ] get_push_template - [ ] get_push_template
- [ ] get_segment - [ ] get_segment
- [ ] get_segment_export_jobs - [ ] get_segment_export_jobs
@ -4890,6 +4987,7 @@
- [ ] get_sms_template - [ ] get_sms_template
- [ ] get_user_endpoints - [ ] get_user_endpoints
- [ ] get_voice_channel - [ ] get_voice_channel
- [ ] list_journeys
- [ ] list_tags_for_resource - [ ] list_tags_for_resource
- [ ] list_templates - [ ] list_templates
- [ ] phone_number_validate - [ ] phone_number_validate
@ -4913,6 +5011,8 @@
- [ ] update_endpoint - [ ] update_endpoint
- [ ] update_endpoints_batch - [ ] update_endpoints_batch
- [ ] update_gcm_channel - [ ] update_gcm_channel
- [ ] update_journey
- [ ] update_journey_state
- [ ] update_push_template - [ ] update_push_template
- [ ] update_segment - [ ] update_segment
- [ ] update_sms_channel - [ ] update_sms_channel
@ -5661,6 +5761,17 @@
0% implemented 0% implemented
- [ ] invoke_endpoint - [ ] invoke_endpoint
## savingsplans
0% implemented
- [ ] create_savings_plan
- [ ] describe_savings_plan_rates
- [ ] describe_savings_plans
- [ ] describe_savings_plans_offering_rates
- [ ] describe_savings_plans_offerings
- [ ] list_tags_for_resource
- [ ] tag_resource
- [ ] untag_resource
## sdb ## sdb
0% implemented 0% implemented
- [ ] batch_delete_attributes - [ ] batch_delete_attributes
@ -5954,6 +6065,51 @@
- [X] verify_email_address - [X] verify_email_address
- [X] verify_email_identity - [X] verify_email_identity
## sesv2
0% implemented
- [ ] create_configuration_set
- [ ] create_configuration_set_event_destination
- [ ] create_dedicated_ip_pool
- [ ] create_deliverability_test_report
- [ ] create_email_identity
- [ ] delete_configuration_set
- [ ] delete_configuration_set_event_destination
- [ ] delete_dedicated_ip_pool
- [ ] delete_email_identity
- [ ] get_account
- [ ] get_blacklist_reports
- [ ] get_configuration_set
- [ ] get_configuration_set_event_destinations
- [ ] get_dedicated_ip
- [ ] get_dedicated_ips
- [ ] get_deliverability_dashboard_options
- [ ] get_deliverability_test_report
- [ ] get_domain_deliverability_campaign
- [ ] get_domain_statistics_report
- [ ] get_email_identity
- [ ] list_configuration_sets
- [ ] list_dedicated_ip_pools
- [ ] list_deliverability_test_reports
- [ ] list_domain_deliverability_campaigns
- [ ] list_email_identities
- [ ] list_tags_for_resource
- [ ] put_account_dedicated_ip_warmup_attributes
- [ ] put_account_sending_attributes
- [ ] put_configuration_set_delivery_options
- [ ] put_configuration_set_reputation_options
- [ ] put_configuration_set_sending_options
- [ ] put_configuration_set_tracking_options
- [ ] put_dedicated_ip_in_pool
- [ ] put_dedicated_ip_warmup_attributes
- [ ] put_deliverability_dashboard_option
- [ ] put_email_identity_dkim_attributes
- [ ] put_email_identity_feedback_attributes
- [ ] put_email_identity_mail_from_attributes
- [ ] send_email
- [ ] tag_resource
- [ ] untag_resource
- [ ] update_configuration_set_event_destination
## shield ## shield
0% implemented 0% implemented
- [ ] associate_drt_log_bucket - [ ] associate_drt_log_bucket
@ -5984,8 +6140,11 @@
- [ ] list_signing_jobs - [ ] list_signing_jobs
- [ ] list_signing_platforms - [ ] list_signing_platforms
- [ ] list_signing_profiles - [ ] list_signing_profiles
- [ ] list_tags_for_resource
- [ ] put_signing_profile - [ ] put_signing_profile
- [ ] start_signing_job - [ ] start_signing_job
- [ ] tag_resource
- [ ] untag_resource
## sms ## sms
0% implemented 0% implemented
@ -6111,7 +6270,7 @@
- [X] untag_queue - [X] untag_queue
## ssm ## ssm
10% implemented 11% implemented
- [X] add_tags_to_resource - [X] add_tags_to_resource
- [ ] cancel_command - [ ] cancel_command
- [ ] cancel_maintenance_window_execution - [ ] cancel_maintenance_window_execution
@ -6184,7 +6343,7 @@
- [ ] get_ops_item - [ ] get_ops_item
- [ ] get_ops_summary - [ ] get_ops_summary
- [X] get_parameter - [X] get_parameter
- [ ] get_parameter_history - [X] get_parameter_history
- [X] get_parameters - [X] get_parameters
- [X] get_parameters_by_path - [X] get_parameters_by_path
- [ ] get_patch_baseline - [ ] get_patch_baseline
@ -6233,6 +6392,19 @@
- [ ] update_patch_baseline - [ ] update_patch_baseline
- [ ] update_service_setting - [ ] update_service_setting
## sso
0% implemented
- [ ] get_role_credentials
- [ ] list_account_roles
- [ ] list_accounts
- [ ] logout
## sso-oidc
0% implemented
- [ ] create_token
- [ ] register_client
- [ ] start_device_authorization
## stepfunctions
36% implemented
- [ ] create_activity
@@ -6742,6 +6914,7 @@
- [ ] delete_ip_group
- [ ] delete_tags
- [ ] delete_workspace_image
- [ ] deregister_workspace_directory
- [ ] describe_account
- [ ] describe_account_modifications
- [ ] describe_client_properties
@@ -6758,10 +6931,14 @@
- [ ] list_available_management_cidr_ranges
- [ ] modify_account
- [ ] modify_client_properties
- [ ] modify_selfservice_permissions
- [ ] modify_workspace_access_properties
- [ ] modify_workspace_creation_properties
- [ ] modify_workspace_properties
- [ ] modify_workspace_state
- [ ] reboot_workspaces
- [ ] rebuild_workspaces
- [ ] register_workspace_directory
- [ ] restore_workspace
- [ ] revoke_ip_rules
- [ ] start_workspaces


@@ -31,7 +31,8 @@ aws_managed_policies:
	scripts/update_managed_policies.py

upload_pypi_artifact:
	python setup.py sdist bdist_wheel
	twine upload dist/*

push_dockerhub_image:
	docker build -t motoserver/moto .


@@ -58,7 +58,7 @@ from .xray import XRaySegment, mock_xray, mock_xray_client  # noqa
# logging.getLogger('boto').setLevel(logging.CRITICAL)

__title__ = "moto"
__version__ = "1.3.15.dev"

try:


@@ -53,8 +53,9 @@ try:
except ImportError:
    from backports.tempfile import TemporaryDirectory

# The lambci container is returning a special escape character for the "RequestID" fields. Unicode 033:
# _stderr_regex = re.compile(r"START|END|REPORT RequestId: .*")
_stderr_regex = re.compile(r"\033\[\d+.*")
_orig_adapter_send = requests.adapters.HTTPAdapter.send
docker_3 = docker.__version__[0] >= "3"
@@ -450,7 +451,7 @@ class LambdaFunction(BaseModel):
        if exit_code != 0:
            raise Exception("lambda invoke failed output: {}".format(output))

        # strip out RequestId lines (TODO: This will return an additional '\n' in the response)
        output = os.linesep.join(
            [
                line
@@ -998,6 +999,32 @@ class LambdaBackend(BaseBackend):
    def add_policy(self, function_name, policy):
        self.get_function(function_name).policy = policy

    def update_function_code(self, function_name, qualifier, body):
        fn = self.get_function(function_name, qualifier)
        if fn:
            if body.get("Publish", False):
                fn = self.publish_function(function_name)
            config = fn.update_function_code(body)
            return config
        else:
            return None

    def update_function_configuration(self, function_name, qualifier, body):
        fn = self.get_function(function_name, qualifier)
        return fn.update_configuration(body) if fn else None

    def invoke(self, function_name, qualifier, body, headers, response_headers):
        fn = self.get_function(function_name, qualifier)
        if fn:
            payload = fn.invoke(body, headers, response_headers)
            response_headers["Content-Length"] = str(len(payload))
            return response_headers, payload
        else:
            return response_headers, None


def do_validate_s3():
    return os.environ.get("VALIDATE_LAMBDA_S3", "") in ["", "1", "true"]
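The refactor above moves invoke/update logic out of the HTTP layer into `LambdaBackend`, which returns `None` for a missing function so the response layer can answer 404 instead of raising. A stdlib-only sketch of that dispatch pattern (`FakeBackend` and `handle_invoke` are hypothetical names, not moto's classes):

```python
# Hypothetical stand-ins for moto's LambdaBackend / LambdaResponse split.
class FakeBackend:
    def __init__(self):
        self._functions = {}  # name -> callable

    def get_function(self, name, qualifier=None):
        return self._functions.get(name)

    def invoke(self, name, qualifier, body, headers, response_headers):
        # The backend signals "not found" with a None payload instead of
        # raising, so the HTTP layer decides the status code.
        fn = self.get_function(name, qualifier)
        if fn:
            payload = fn(body)
            response_headers["Content-Length"] = str(len(payload))
            return response_headers, payload
        return response_headers, None


def handle_invoke(backend, name, body):
    response_headers = {}
    _, payload = backend.invoke(name, None, body, {}, response_headers)
    if payload is not None:
        return 202, response_headers, payload
    return 404, response_headers, "{}"


backend = FakeBackend()
backend._functions["echo"] = lambda body: body.upper()
status, _, payload = handle_invoke(backend, "echo", "hello")       # 202, "HELLO"
missing, _, _ = handle_invoke(backend, "does-not-exist", "hello")  # 404
```

The same None-means-404 convention is reused below for `update_function_code` and `update_function_configuration`.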


@@ -168,10 +168,10 @@ class LambdaResponse(BaseResponse):
        function_name = self.path.rsplit("/", 2)[-2]
        qualifier = self._get_param("qualifier")

        response_header, payload = self.lambda_backend.invoke(
            function_name, qualifier, self.body, self.headers, response_headers
        )
        if payload:
            return 202, response_headers, payload
        else:
            return 404, response_headers, "{}"
@@ -321,26 +321,23 @@ class LambdaResponse(BaseResponse):
    def _put_configuration(self, request):
        function_name = self.path.rsplit("/", 2)[-2]
        qualifier = self._get_param("Qualifier", None)
        resp = self.lambda_backend.update_function_configuration(
            function_name, qualifier, body=self.json_body
        )
        if resp:
            return 200, {}, json.dumps(resp)
        else:
            return 404, {}, "{}"

    def _put_code(self):
        function_name = self.path.rsplit("/", 2)[-2]
        qualifier = self._get_param("Qualifier", None)
        resp = self.lambda_backend.update_function_code(
            function_name, qualifier, body=self.json_body
        )
        if resp:
            return 200, {}, json.dumps(resp)
        else:
            return 404, {}, "{}"


@@ -624,7 +624,7 @@ class BatchBackend(BaseBackend):
    def get_job_definition(self, identifier):
        """
        Get job definition by name or ARN

        :param identifier: Name or ARN
        :type identifier: str
@@ -643,7 +643,7 @@ class BatchBackend(BaseBackend):
    def get_job_definitions(self, identifier):
        """
        Get job definitions by name or ARN

        :param identifier: Name or ARN
        :type identifier: str
@@ -934,7 +934,7 @@ class BatchBackend(BaseBackend):
        self.ecs_backend.delete_cluster(compute_env.ecs_name)

        if compute_env.env_type == "MANAGED":
            # Delete compute environment
            instance_ids = [instance.id for instance in compute_env.instances]
            self.ec2_backend.terminate_instances(instance_ids)
@@ -1195,7 +1195,7 @@ class BatchBackend(BaseBackend):
        depends_on=None,
        container_overrides=None,
    ):
        # TODO parameters, retries (which is a dict raw from request), job dependencies and container overrides are ignored for now

        # Look for job definition
        job_def = self.get_job_definition(job_def_id)


@@ -27,12 +27,14 @@ class Task(BaseModel):
        name,
        region_name,
        arn_counter=0,
        metadata=None,
    ):
        self.source_location_arn = source_location_arn
        self.destination_location_arn = destination_location_arn
        self.name = name
        self.metadata = metadata
        # For simplicity Tasks are either available or running
        self.status = "AVAILABLE"
        self.current_task_execution_arn = None
        # Generate ARN
        self.arn = "arn:aws:datasync:{0}:111222333444:task/task-{1}".format(
@@ -129,7 +131,27 @@ class DataSyncBackend(BaseBackend):
        self.locations[location.arn] = location
        return location.arn

    def _get_location(self, location_arn, typ):
        if location_arn not in self.locations:
            raise InvalidRequestException(
                "Location {0} is not found.".format(location_arn)
            )
        location = self.locations[location_arn]
        if location.typ != typ:
            raise InvalidRequestException(
                "Invalid Location type: {0}".format(location.typ)
            )
        return location

    def delete_location(self, location_arn):
        if location_arn in self.locations:
            del self.locations[location_arn]
        else:
            raise InvalidRequestException

    def create_task(
        self, source_location_arn, destination_location_arn, name, metadata=None
    ):
        if source_location_arn not in self.locations:
            raise InvalidRequestException(
                "Location {0} not found.".format(source_location_arn)
@@ -145,10 +167,33 @@ class DataSyncBackend(BaseBackend):
            name,
            region_name=self.region_name,
            arn_counter=self.arn_counter,
            metadata=metadata,
        )
        self.tasks[task.arn] = task
        return task.arn

    def _get_task(self, task_arn):
        if task_arn in self.tasks:
            return self.tasks[task_arn]
        else:
            raise InvalidRequestException

    def update_task(self, task_arn, name, metadata):
        if task_arn in self.tasks:
            task = self.tasks[task_arn]
            task.name = name
            task.metadata = metadata
        else:
            raise InvalidRequestException(
                "Sync task {0} is not found.".format(task_arn)
            )

    def delete_task(self, task_arn):
        if task_arn in self.tasks:
            del self.tasks[task_arn]
        else:
            raise InvalidRequestException

    def start_task_execution(self, task_arn):
        self.arn_counter = self.arn_counter + 1
        if task_arn in self.tasks:
@@ -161,12 +206,19 @@ class DataSyncBackend(BaseBackend):
            return task_execution.arn
        raise InvalidRequestException("Invalid request.")

    def _get_task_execution(self, task_execution_arn):
        if task_execution_arn in self.task_executions:
            return self.task_executions[task_execution_arn]
        else:
            raise InvalidRequestException

    def cancel_task_execution(self, task_execution_arn):
        if task_execution_arn in self.task_executions:
            task_execution = self.task_executions[task_execution_arn]
            task_execution.cancel()
            task_arn = task_execution.task_arn
            self.tasks[task_arn].current_task_execution_arn = None
            self.tasks[task_arn].status = "AVAILABLE"
            return
        raise InvalidRequestException(
            "Sync task {0} is not found.".format(task_execution_arn)
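The new backend methods treat the task dict as the single source of truth and raise `InvalidRequestException` for unknown ARNs; `cancel_task_execution` now also flips the task back to `AVAILABLE`. A hypothetical, stdlib-only registry mirroring that lifecycle (class and ARN formats here are illustrative, not moto's actual implementation):

```python
# Hypothetical in-memory registry mirroring DataSyncBackend's task lifecycle.
class InvalidRequestException(Exception):
    pass


class TaskRegistry:
    def __init__(self):
        self.tasks = {}       # task ARN -> state dict
        self.executions = {}  # execution ARN -> task ARN
        self._counter = 0

    def create_task(self, name, metadata=None):
        self._counter += 1
        arn = "arn:aws:datasync:us-east-1:111222333444:task/task-%05d" % self._counter
        self.tasks[arn] = {"name": name, "metadata": metadata, "status": "AVAILABLE"}
        return arn

    def update_task(self, task_arn, name, metadata):
        if task_arn not in self.tasks:
            raise InvalidRequestException("Sync task {0} is not found.".format(task_arn))
        self.tasks[task_arn].update(name=name, metadata=metadata)

    def start_task_execution(self, task_arn):
        self._counter += 1
        exec_arn = task_arn + "/execution/exec-%05d" % self._counter
        self.tasks[task_arn]["status"] = "RUNNING"
        self.executions[exec_arn] = task_arn
        return exec_arn

    def cancel_task_execution(self, exec_arn):
        # Cancelling releases the task back to AVAILABLE (the fix above).
        task_arn = self.executions.pop(exec_arn)
        self.tasks[task_arn]["status"] = "AVAILABLE"


reg = TaskRegistry()
arn = reg.create_task("nightly-sync")
exec_arn = reg.start_task_execution(arn)
reg.cancel_task_execution(exec_arn)
```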


@@ -2,7 +2,6 @@ import json

from moto.core.responses import BaseResponse

from .models import datasync_backends
@@ -18,17 +17,7 @@ class DataSyncResponse(BaseResponse):
        return json.dumps({"Locations": locations})

    def _get_location(self, location_arn, typ):
        return self.datasync_backend._get_location(location_arn, typ)

    def create_location_s3(self):
        # s3://bucket_name/folder/
@@ -86,16 +75,40 @@ class DataSyncResponse(BaseResponse):
            }
        )

    def delete_location(self):
        location_arn = self._get_param("LocationArn")
        self.datasync_backend.delete_location(location_arn)
        return json.dumps({})

    def create_task(self):
        destination_location_arn = self._get_param("DestinationLocationArn")
        source_location_arn = self._get_param("SourceLocationArn")
        name = self._get_param("Name")
        metadata = {
            "CloudWatchLogGroupArn": self._get_param("CloudWatchLogGroupArn"),
            "Options": self._get_param("Options"),
            "Excludes": self._get_param("Excludes"),
            "Tags": self._get_param("Tags"),
        }
        arn = self.datasync_backend.create_task(
            source_location_arn, destination_location_arn, name, metadata=metadata
        )
        return json.dumps({"TaskArn": arn})

    def update_task(self):
        task_arn = self._get_param("TaskArn")
        self.datasync_backend.update_task(
            task_arn,
            name=self._get_param("Name"),
            metadata={
                "CloudWatchLogGroupArn": self._get_param("CloudWatchLogGroupArn"),
                "Options": self._get_param("Options"),
                "Excludes": self._get_param("Excludes"),
                "Tags": self._get_param("Tags"),
            },
        )
        return json.dumps({})

    def list_tasks(self):
        tasks = list()
        for arn, task in self.datasync_backend.tasks.items():
@@ -104,29 +117,32 @@ class DataSyncResponse(BaseResponse):
            )
        return json.dumps({"Tasks": tasks})

    def delete_task(self):
        task_arn = self._get_param("TaskArn")
        self.datasync_backend.delete_task(task_arn)
        return json.dumps({})

    def describe_task(self):
        task_arn = self._get_param("TaskArn")
        task = self.datasync_backend._get_task(task_arn)
        return json.dumps(
            {
                "TaskArn": task.arn,
                "Status": task.status,
                "Name": task.name,
                "CurrentTaskExecutionArn": task.current_task_execution_arn,
                "SourceLocationArn": task.source_location_arn,
                "DestinationLocationArn": task.destination_location_arn,
                "CloudWatchLogGroupArn": task.metadata["CloudWatchLogGroupArn"],
                "Options": task.metadata["Options"],
                "Excludes": task.metadata["Excludes"],
            }
        )

    def start_task_execution(self):
        task_arn = self._get_param("TaskArn")
        arn = self.datasync_backend.start_task_execution(task_arn)
        return json.dumps({"TaskExecutionArn": arn})

    def cancel_task_execution(self):
        task_execution_arn = self._get_param("TaskExecutionArn")
@@ -135,21 +151,12 @@ class DataSyncResponse(BaseResponse):
    def describe_task_execution(self):
        task_execution_arn = self._get_param("TaskExecutionArn")
        task_execution = self.datasync_backend._get_task_execution(task_execution_arn)
        result = json.dumps(
            {"TaskExecutionArn": task_execution.arn, "Status": task_execution.status}
        )
        if task_execution.status == "SUCCESS":
            self.datasync_backend.tasks[task_execution.task_arn].status = "AVAILABLE"
        # Simulate task being executed
        task_execution.iterate_status()
        return result
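`describe_task_execution` relies on `iterate_status()` to advance a fake execution one phase per describe call until it reaches `SUCCESS`. A minimal sketch of that simulation (the class is hypothetical and the phase names are illustrative, not necessarily moto's exact state list):

```python
# Illustrative phase sequence; each describe call advances one step.
PHASES = ["INITIALIZING", "PREPARING", "TRANSFERRING", "VERIFYING", "SUCCESS"]


class FakeTaskExecution:
    def __init__(self):
        self._i = 0

    @property
    def status(self):
        return PHASES[self._i]

    def iterate_status(self):
        # Advance until the terminal state, then stay there.
        if self._i < len(PHASES) - 1:
            self._i += 1


ex = FakeTaskExecution()
seen = []
while True:
    seen.append(ex.status)  # what describe_task_execution would report
    if ex.status == "SUCCESS":
        break
    ex.iterate_status()
```

Polling describe in a test therefore observes each phase exactly once before the execution settles in `SUCCESS`.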


@@ -77,6 +77,7 @@ class DynamoType(object):
        attr, list_index = attribute_is_list(attr)
        if not key:
            # {'S': value} ==> {'S': new_value}
            self.type = new_value.type
            self.value = new_value.value
        else:
            if attr not in self.value:  # nonexistingattribute


@@ -214,6 +214,7 @@ class NetworkInterface(TaggedEC2Resource):
        ec2_backend,
        subnet,
        private_ip_address,
        private_ip_addresses=None,
        device_index=0,
        public_ip_auto_assign=True,
        group_ids=None,
@@ -223,6 +224,7 @@ class NetworkInterface(TaggedEC2Resource):
        self.id = random_eni_id()
        self.device_index = device_index
        self.private_ip_address = private_ip_address or random_private_ip()
        self.private_ip_addresses = private_ip_addresses
        self.subnet = subnet
        self.instance = None
        self.attachment_id = None
@@ -341,12 +343,19 @@ class NetworkInterfaceBackend(object):
        super(NetworkInterfaceBackend, self).__init__()

    def create_network_interface(
        self,
        subnet,
        private_ip_address,
        private_ip_addresses=None,
        group_ids=None,
        description=None,
        **kwargs
    ):
        eni = NetworkInterface(
            self,
            subnet,
            private_ip_address,
            private_ip_addresses,
            group_ids=group_ids,
            description=description,
            **kwargs
@@ -2435,6 +2444,7 @@ class VPC(TaggedEC2Resource):
        self.instance_tenancy = instance_tenancy
        self.is_default = "true" if is_default else "false"
        self.enable_dns_support = "true"
        self.classic_link_enabled = "false"
        # This attribute is set to 'true' only for default VPCs
        # or VPCs created using the wizard of the VPC console
        self.enable_dns_hostnames = "true" if is_default else "false"
@@ -2531,6 +2541,32 @@ class VPC(TaggedEC2Resource):
            self.cidr_block_association_set[association_id] = association_set
            return association_set

    def enable_vpc_classic_link(self):
        # Check if current cidr block doesn't fall within the 10.0.0.0/8 block,
        # excluding 10.0.0.0/16 and 10.1.0.0/16.
        # Doesn't check any route tables, maybe something for in the future?
        # See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/vpc-classiclink.html#classiclink-limitations
        network_address = ipaddress.ip_network(self.cidr_block).network_address
        if (
            network_address not in ipaddress.ip_network("10.0.0.0/8")
            or network_address in ipaddress.ip_network("10.0.0.0/16")
            or network_address in ipaddress.ip_network("10.1.0.0/16")
        ):
            self.classic_link_enabled = "true"

        return self.classic_link_enabled

    def disable_vpc_classic_link(self):
        self.classic_link_enabled = "false"
        return self.classic_link_enabled

    def enable_vpc_classic_link_dns_support(self):
        self.classic_link_dns_supported = "true"
        return self.classic_link_dns_supported

    def disable_vpc_classic_link_dns_support(self):
        self.classic_link_dns_supported = "false"
        return self.classic_link_dns_supported
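The eligibility check above rejects CIDRs inside 10.0.0.0/8 except the documented 10.0.0.0/16 and 10.1.0.0/16 carve-outs. The same `ipaddress` logic, extracted as a standalone predicate (`classic_link_allowed` is a hypothetical helper, not part of moto):

```python
import ipaddress


def classic_link_allowed(cidr_block):
    # Reject CIDRs inside 10.0.0.0/8, except the carve-outs
    # 10.0.0.0/16 and 10.1.0.0/16 (AWS ClassicLink limitation).
    network_address = ipaddress.ip_network(cidr_block).network_address
    return (
        network_address not in ipaddress.ip_network("10.0.0.0/8")
        or network_address in ipaddress.ip_network("10.0.0.0/16")
        or network_address in ipaddress.ip_network("10.1.0.0/16")
    )
```

Note that `in` works directly between an `IPv4Address` and an `IPv4Network` in Python 3, which is what makes this condition so compact.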
    def disassociate_vpc_cidr_block(self, association_id):
        if self.cidr_block == self.cidr_block_association_set.get(
            association_id, {}
@@ -2661,6 +2697,22 @@ class VPCBackend(object):
        else:
            raise InvalidParameterValueError(attr_name)

    def enable_vpc_classic_link(self, vpc_id):
        vpc = self.get_vpc(vpc_id)
        return vpc.enable_vpc_classic_link()

    def disable_vpc_classic_link(self, vpc_id):
        vpc = self.get_vpc(vpc_id)
        return vpc.disable_vpc_classic_link()

    def enable_vpc_classic_link_dns_support(self, vpc_id):
        vpc = self.get_vpc(vpc_id)
        return vpc.enable_vpc_classic_link_dns_support()

    def disable_vpc_classic_link_dns_support(self, vpc_id):
        vpc = self.get_vpc(vpc_id)
        return vpc.disable_vpc_classic_link_dns_support()

    def modify_vpc_attribute(self, vpc_id, attr_name, attr_value):
        vpc = self.get_vpc(vpc_id)
        if attr_name in ("enable_dns_support", "enable_dns_hostnames"):
@@ -2819,6 +2871,9 @@ class Subnet(TaggedEC2Resource):
        self.vpc_id = vpc_id
        self.cidr_block = cidr_block
        self.cidr = ipaddress.IPv4Network(six.text_type(self.cidr_block), strict=False)
        self._available_ip_addresses = (
            ipaddress.IPv4Network(six.text_type(self.cidr_block)).num_addresses - 5
        )
        self._availability_zone = availability_zone
        self.default_for_az = default_for_az
        self.map_public_ip_on_launch = map_public_ip_on_launch
@@ -2854,6 +2909,21 @@ class Subnet(TaggedEC2Resource):
        return subnet

    @property
    def available_ip_addresses(self):
        enis = [
            eni
            for eni in self.ec2_backend.get_all_network_interfaces()
            if eni.subnet.id == self.id
        ]
        addresses_taken = [
            eni.private_ip_address for eni in enis if eni.private_ip_address
        ]
        for eni in enis:
            if eni.private_ip_addresses:
                addresses_taken.extend(eni.private_ip_addresses)
        return str(self._available_ip_addresses - len(addresses_taken))

    @property
    def availability_zone(self):
        return self._availability_zone.name
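`available_ip_addresses` starts from the subnet size minus the 5 addresses AWS reserves per subnet, then subtracts every private IP held by an ENI in the subnet. The arithmetic, isolated as a hypothetical stdlib-only helper:

```python
import ipaddress


def available_ip_addresses(cidr_block, taken_ips):
    # AWS reserves 5 addresses per subnet; every ENI private IP in the
    # subnet reduces the remaining pool. Returned as a string, matching
    # the <availableIpAddressCount> template field.
    base = ipaddress.IPv4Network(cidr_block).num_addresses - 5
    return str(base - len(taken_ips))
```

For an empty /24 this yields "251", which is exactly the value the response templates below used to hardcode before this change.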


@@ -7,12 +7,13 @@ class ElasticNetworkInterfaces(BaseResponse):
    def create_network_interface(self):
        subnet_id = self._get_param("SubnetId")
        private_ip_address = self._get_param("PrivateIpAddress")
        private_ip_addresses = self._get_multi_param("PrivateIpAddresses")
        groups = self._get_multi_param("SecurityGroupId")
        subnet = self.ec2_backend.get_subnet(subnet_id)
        description = self._get_param("Description")
        if self.is_not_dryrun("CreateNetworkInterface"):
            eni = self.ec2_backend.create_network_interface(
                subnet, private_ip_address, private_ip_addresses, groups, description
            )
            template = self.response_template(CREATE_NETWORK_INTERFACE_RESPONSE)
            return template.render(eni=eni)


@@ -53,7 +53,7 @@ CREATE_SUBNET_RESPONSE = """
    <state>pending</state>
    <vpcId>{{ subnet.vpc_id }}</vpcId>
    <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>
    <availableIpAddressCount>{{ subnet.available_ip_addresses }}</availableIpAddressCount>
    <availabilityZone>{{ subnet._availability_zone.name }}</availabilityZone>
    <availabilityZoneId>{{ subnet._availability_zone.zone_id }}</availabilityZoneId>
    <defaultForAz>{{ subnet.default_for_az }}</defaultForAz>
@@ -81,7 +81,7 @@ DESCRIBE_SUBNETS_RESPONSE = """
    <state>available</state>
    <vpcId>{{ subnet.vpc_id }}</vpcId>
    <cidrBlock>{{ subnet.cidr_block }}</cidrBlock>
    <availableIpAddressCount>{{ subnet.available_ip_addresses }}</availableIpAddressCount>
    <availabilityZone>{{ subnet._availability_zone.name }}</availabilityZone>
    <availabilityZoneId>{{ subnet._availability_zone.zone_id }}</availabilityZoneId>
    <defaultForAz>{{ subnet.default_for_az }}</defaultForAz>


@@ -5,6 +5,13 @@ from moto.ec2.utils import filters_from_querystring

class VPCs(BaseResponse):
    def _get_doc_date(self):
        return (
            "2013-10-15"
            if "Boto/" in self.headers.get("user-agent", "")
            else "2016-11-15"
        )

    def create_vpc(self):
        cidr_block = self._get_param("CidrBlock")
        instance_tenancy = self._get_param("InstanceTenancy", if_none="default")
@@ -16,11 +23,7 @@ class VPCs(BaseResponse):
            instance_tenancy,
            amazon_provided_ipv6_cidr_block=amazon_provided_ipv6_cidr_blocks,
        )
        doc_date = self._get_doc_date()
        template = self.response_template(CREATE_VPC_RESPONSE)
        return template.render(vpc=vpc, doc_date=doc_date)
@@ -50,6 +53,64 @@ class VPCs(BaseResponse):
        template = self.response_template(DESCRIBE_VPC_ATTRIBUTE_RESPONSE)
        return template.render(vpc_id=vpc_id, attribute=attribute, value=value)
    def describe_vpc_classic_link_dns_support(self):
        vpc_ids = self._get_multi_param("VpcIds")
        filters = filters_from_querystring(self.querystring)
        vpcs = self.ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)
        doc_date = self._get_doc_date()
        template = self.response_template(
            DESCRIBE_VPC_CLASSIC_LINK_DNS_SUPPORT_RESPONSE
        )
        return template.render(vpcs=vpcs, doc_date=doc_date)

    def enable_vpc_classic_link_dns_support(self):
        vpc_id = self._get_param("VpcId")
        classic_link_dns_supported = self.ec2_backend.enable_vpc_classic_link_dns_support(
            vpc_id=vpc_id
        )
        doc_date = self._get_doc_date()
        template = self.response_template(ENABLE_VPC_CLASSIC_LINK_DNS_SUPPORT_RESPONSE)
        return template.render(
            classic_link_dns_supported=classic_link_dns_supported, doc_date=doc_date
        )

    def disable_vpc_classic_link_dns_support(self):
        vpc_id = self._get_param("VpcId")
        classic_link_dns_supported = self.ec2_backend.disable_vpc_classic_link_dns_support(
            vpc_id=vpc_id
        )
        doc_date = self._get_doc_date()
        template = self.response_template(DISABLE_VPC_CLASSIC_LINK_DNS_SUPPORT_RESPONSE)
        return template.render(
            classic_link_dns_supported=classic_link_dns_supported, doc_date=doc_date
        )

    def describe_vpc_classic_link(self):
        vpc_ids = self._get_multi_param("VpcId")
        filters = filters_from_querystring(self.querystring)
        vpcs = self.ec2_backend.get_all_vpcs(vpc_ids=vpc_ids, filters=filters)
        doc_date = self._get_doc_date()
        template = self.response_template(DESCRIBE_VPC_CLASSIC_LINK_RESPONSE)
        return template.render(vpcs=vpcs, doc_date=doc_date)

    def enable_vpc_classic_link(self):
        vpc_id = self._get_param("VpcId")
        classic_link_enabled = self.ec2_backend.enable_vpc_classic_link(vpc_id=vpc_id)
        doc_date = self._get_doc_date()
        template = self.response_template(ENABLE_VPC_CLASSIC_LINK_RESPONSE)
        return template.render(
            classic_link_enabled=classic_link_enabled, doc_date=doc_date
        )

    def disable_vpc_classic_link(self):
        vpc_id = self._get_param("VpcId")
        classic_link_enabled = self.ec2_backend.disable_vpc_classic_link(vpc_id=vpc_id)
        doc_date = self._get_doc_date()
        template = self.response_template(DISABLE_VPC_CLASSIC_LINK_RESPONSE)
        return template.render(
            classic_link_enabled=classic_link_enabled, doc_date=doc_date
        )
    def modify_vpc_attribute(self):
        vpc_id = self._get_param("VpcId")
@@ -149,6 +210,56 @@ CREATE_VPC_RESPONSE = """
   </vpc>
</CreateVpcResponse>"""
DESCRIBE_VPC_CLASSIC_LINK_DNS_SUPPORT_RESPONSE = """
<DescribeVpcClassicLinkDnsSupportResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
<vpcs>
{% for vpc in vpcs %}
<item>
<vpcId>{{ vpc.id }}</vpcId>
<classicLinkDnsSupported>{{ vpc.classic_link_dns_supported }}</classicLinkDnsSupported>
</item>
{% endfor %}
</vpcs>
</DescribeVpcClassicLinkDnsSupportResponse>"""
ENABLE_VPC_CLASSIC_LINK_DNS_SUPPORT_RESPONSE = """
<EnableVpcClassicLinkDnsSupportResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
<return>{{ classic_link_dns_supported }}</return>
</EnableVpcClassicLinkDnsSupportResponse>"""
DISABLE_VPC_CLASSIC_LINK_DNS_SUPPORT_RESPONSE = """
<DisableVpcClassicLinkDnsSupportResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
<return>{{ classic_link_dns_supported }}</return>
</DisableVpcClassicLinkDnsSupportResponse>"""
DESCRIBE_VPC_CLASSIC_LINK_RESPONSE = """
<DescribeVpcClassicLinkResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
<vpcSet>
{% for vpc in vpcs %}
<item>
<vpcId>{{ vpc.id }}</vpcId>
<classicLinkEnabled>{{ vpc.classic_link_enabled }}</classicLinkEnabled>
</item>
{% endfor %}
</vpcSet>
</DescribeVpcClassicLinkResponse>"""
ENABLE_VPC_CLASSIC_LINK_RESPONSE = """
<EnableVpcClassicLinkResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
<return>{{ classic_link_enabled }}</return>
</EnableVpcClassicLinkResponse>"""
DISABLE_VPC_CLASSIC_LINK_RESPONSE = """
<DisableVpcClassicLinkResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
<requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
<return>{{ classic_link_enabled }}</return>
</DisableVpcClassicLinkResponse>"""
DESCRIBE_VPCS_RESPONSE = """
<DescribeVpcsResponse xmlns="http://ec2.amazonaws.com/doc/{{doc_date}}/">
  <requestId>7a62c442-3484-4f42-9342-6942EXAMPLE</requestId>
@ -5,6 +5,7 @@ import boto3
from moto.core.exceptions import JsonRESTError
from moto.core import BaseBackend, BaseModel
from moto.sts.models import ACCOUNT_ID
class Rule(BaseModel):
@ -54,6 +55,42 @@ class Rule(BaseModel):
self.targets.pop(index)
class EventBus(BaseModel):
def __init__(self, region_name, name):
self.region = region_name
self.name = name
self._permissions = {}
@property
def arn(self):
return "arn:aws:events:{region}:{account_id}:event-bus/{name}".format(
region=self.region, account_id=ACCOUNT_ID, name=self.name
)
@property
def policy(self):
if not len(self._permissions):
return None
policy = {"Version": "2012-10-17", "Statement": []}
for sid, permission in self._permissions.items():
policy["Statement"].append(
{
"Sid": sid,
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::{}:root".format(permission["Principal"])
},
"Action": permission["Action"],
"Resource": self.arn,
}
)
return json.dumps(policy)
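The `policy` property above assembles an IAM-style resource policy from the stored permissions. A self-contained sketch of the same assembly (the function name here is illustrative, not part of moto's API):

```python
import json

def build_bus_policy(bus_arn, permissions):
    # Mirror EventBus.policy: one Allow statement per stored permission,
    # or None when no permissions have been granted.
    if not permissions:
        return None
    policy = {"Version": "2012-10-17", "Statement": []}
    for sid, permission in permissions.items():
        policy["Statement"].append(
            {
                "Sid": sid,
                "Effect": "Allow",
                "Principal": {
                    "AWS": "arn:aws:iam::{}:root".format(permission["Principal"])
                },
                "Action": permission["Action"],
                "Resource": bus_arn,
            }
        )
    return json.dumps(policy)
```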
class EventsBackend(BaseBackend):
ACCOUNT_ID = re.compile(r"^(\d{1,12}|\*)$")
STATEMENT_ID = re.compile(r"^[a-zA-Z0-9-_]{1,64}$")
@ -65,13 +102,19 @@ class EventsBackend(BaseBackend):
self.rules_order = []
self.next_tokens = {}
self.region_name = region_name
self.event_buses = {}
self.event_sources = {}
self._add_default_event_bus()
def reset(self):
region_name = self.region_name
self.__dict__ = {}
self.__init__(region_name)
def _add_default_event_bus(self):
self.event_buses["default"] = EventBus(self.region_name, "default")
def _get_rule_by_index(self, i):
return self.rules.get(self.rules_order[i])
@ -221,9 +264,17 @@ class EventsBackend(BaseBackend):
def test_event_pattern(self):
raise NotImplementedError()
def put_permission(self, event_bus_name, action, principal, statement_id):
if not event_bus_name:
event_bus_name = "default"
event_bus = self.describe_event_bus(event_bus_name)
if action is None or action != "events:PutEvents":
raise JsonRESTError(
"ValidationException",
"Provided value in parameter 'action' is not supported.",
)
if principal is None or self.ACCOUNT_ID.match(principal) is None:
raise JsonRESTError(
@ -235,34 +286,81 @@ class EventsBackend(BaseBackend):
"InvalidParameterValue", "StatementId must match ^[a-zA-Z0-9-_]{1,64}$"
)
event_bus._permissions[statement_id] = {
"Action": action,
"Principal": principal,
}
def remove_permission(self, event_bus_name, statement_id):
if not event_bus_name:
event_bus_name = "default"
event_bus = self.describe_event_bus(event_bus_name)
if not len(event_bus._permissions):
raise JsonRESTError(
"ResourceNotFoundException", "EventBus does not have a policy."
)
if not event_bus._permissions.pop(statement_id, None):
raise JsonRESTError(
"ResourceNotFoundException",
"Statement with the provided id does not exist.",
)
def describe_event_bus(self, name):
if not name:
name = "default"
event_bus = self.event_buses.get(name)
if not event_bus:
raise JsonRESTError(
"ResourceNotFoundException",
"Event bus {} does not exist.".format(name),
)
return event_bus
def create_event_bus(self, name, event_source_name):
if name in self.event_buses:
raise JsonRESTError(
"ResourceAlreadyExistsException",
"Event bus {} already exists.".format(name),
)
if not event_source_name and "/" in name:
raise JsonRESTError(
"ValidationException", "Event bus name must not contain '/'."
)
if event_source_name and event_source_name not in self.event_sources:
raise JsonRESTError(
"ResourceNotFoundException",
"Event source {} does not exist.".format(event_source_name),
)
self.event_buses[name] = EventBus(self.region_name, name)
return self.event_buses[name]
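`create_event_bus` above rejects duplicate names and only allows `/` in a name when the bus is backed by an event source. A minimal standalone sketch of that validation rule (the function name is illustrative):

```python
def validate_bus_name(name, event_source_name=None, existing=()):
    # Mirror create_event_bus validation: no duplicate names, and '/'
    # is only permitted for event-source-backed (partner) buses.
    if name in existing:
        raise ValueError("Event bus {} already exists.".format(name))
    if not event_source_name and "/" in name:
        raise ValueError("Event bus name must not contain '/'.")
    return name
```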
def list_event_buses(self, name_prefix):
if name_prefix:
return [
event_bus
for event_bus in self.event_buses.values()
if event_bus.name.startswith(name_prefix)
]
return list(self.event_buses.values())
def delete_event_bus(self, name):
if name == "default":
raise JsonRESTError(
"ValidationException", "Cannot delete event bus default."
)
self.event_buses.pop(name, None)
available_regions = boto3.session.Session().get_available_regions("events")
@ -238,20 +238,68 @@ class EventsHandler(BaseResponse):
pass
def put_permission(self):
event_bus_name = self._get_param("EventBusName")
action = self._get_param("Action")
principal = self._get_param("Principal")
statement_id = self._get_param("StatementId")
self.events_backend.put_permission(
event_bus_name, action, principal, statement_id
)
return ""
def remove_permission(self):
event_bus_name = self._get_param("EventBusName")
statement_id = self._get_param("StatementId")
self.events_backend.remove_permission(event_bus_name, statement_id)
return ""
def describe_event_bus(self):
name = self._get_param("Name")
event_bus = self.events_backend.describe_event_bus(name)
response = {
"Name": event_bus.name,
"Arn": event_bus.arn,
}
if event_bus.policy:
response["Policy"] = event_bus.policy
return json.dumps(response), self.response_headers
def create_event_bus(self):
name = self._get_param("Name")
event_source_name = self._get_param("EventSourceName")
event_bus = self.events_backend.create_event_bus(name, event_source_name)
return json.dumps({"EventBusArn": event_bus.arn}), self.response_headers
def list_event_buses(self):
name_prefix = self._get_param("NamePrefix")
# ToDo: add 'NextToken' & 'Limit' parameters
response = []
for event_bus in self.events_backend.list_event_buses(name_prefix):
event_bus_response = {
"Name": event_bus.name,
"Arn": event_bus.arn,
}
if event_bus.policy:
event_bus_response["Policy"] = event_bus.policy
response.append(event_bus_response)
return json.dumps({"EventBuses": response}), self.response_headers
def delete_event_bus(self):
name = self._get_param("Name")
self.events_backend.delete_event_bus(name)
return "", self.response_headers
@ -1,5 +1,6 @@
from __future__ import unicode_literals
import base64
import hashlib
import os
import random
import string
@ -475,6 +476,20 @@ class AccessKey(BaseModel):
raise UnformattedGetAttTemplateException()
class SshPublicKey(BaseModel):
def __init__(self, user_name, ssh_public_key_body):
self.user_name = user_name
self.ssh_public_key_body = ssh_public_key_body
self.ssh_public_key_id = "APKA" + random_access_key()
self.fingerprint = hashlib.md5(ssh_public_key_body.encode()).hexdigest()
self.status = "Active"
self.upload_date = datetime.utcnow()
@property
def uploaded_iso_8601(self):
return iso_8601_datetime_without_milliseconds(self.upload_date)
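The `SshPublicKey` model above derives its fingerprint as the hex MD5 digest of the raw key body text; this is moto's own scheme rather than a claim about how AWS computes real IAM fingerprints. A self-contained sketch:

```python
import hashlib

def ssh_key_fingerprint(ssh_public_key_body):
    # Same scheme as SshPublicKey: hex MD5 over the raw body text.
    return hashlib.md5(ssh_public_key_body.encode()).hexdigest()
```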
class Group(BaseModel):
def __init__(self, name, path="/"):
self.name = name
@ -536,6 +551,7 @@ class User(BaseModel):
self.policies = {}
self.managed_policies = {}
self.access_keys = []
self.ssh_public_keys = []
self.password = None
self.password_reset_required = False
self.signing_certificates = {}
@ -605,6 +621,33 @@ class User(BaseModel):
"The Access Key with id {0} cannot be found".format(access_key_id)
)
def upload_ssh_public_key(self, ssh_public_key_body):
pubkey = SshPublicKey(self.name, ssh_public_key_body)
self.ssh_public_keys.append(pubkey)
return pubkey
def get_ssh_public_key(self, ssh_public_key_id):
for key in self.ssh_public_keys:
if key.ssh_public_key_id == ssh_public_key_id:
return key
else:
raise IAMNotFoundException(
"The SSH Public Key with id {0} cannot be found".format(
ssh_public_key_id
)
)
def get_all_ssh_public_keys(self):
return self.ssh_public_keys
def update_ssh_public_key(self, ssh_public_key_id, status):
key = self.get_ssh_public_key(ssh_public_key_id)
key.status = status
def delete_ssh_public_key(self, ssh_public_key_id):
key = self.get_ssh_public_key(ssh_public_key_id)
self.ssh_public_keys.remove(key)
def get_cfn_attribute(self, attribute_name):
from moto.cloudformation.exceptions import UnformattedGetAttTemplateException
@ -719,7 +762,7 @@ class AccountPasswordPolicy(BaseModel):
def _format_error(self, key, value, constraint):
return 'Value "{value}" at "{key}" failed to satisfy constraint: {constraint}'.format(
constraint=constraint, key=key, value=value
)
def _raise_errors(self):
@ -731,11 +774,139 @@ class AccountPasswordPolicy(BaseModel):
raise ValidationError(
"{count} validation error{plural} detected: {errors}".format(
count=count, plural=plural, errors=errors
)
)
class AccountSummary(BaseModel):
def __init__(self, iam_backend):
self._iam_backend = iam_backend
self._group_policy_size_quota = 5120
self._instance_profiles_quota = 1000
self._groups_per_user_quota = 10
self._attached_policies_per_user_quota = 10
self._policies_quota = 1500
self._account_mfa_enabled = 0 # Haven't found a way to activate MFA for the root account programmatically
self._access_keys_per_user_quota = 2
self._assume_role_policy_size_quota = 2048
self._policy_versions_in_use_quota = 10000
self._global_endpoint_token_version = (
1 # ToDo: Implement set_security_token_service_preferences()
)
self._versions_per_policy_quota = 5
self._attached_policies_per_group_quota = 10
self._policy_size_quota = 6144
self._account_signing_certificates_present = 0 # valid values: 0 | 1
self._users_quota = 5000
self._server_certificates_quota = 20
self._user_policy_size_quota = 2048
self._roles_quota = 1000
self._signing_certificates_per_user_quota = 2
self._role_policy_size_quota = 10240
self._attached_policies_per_role_quota = 10
self._account_access_keys_present = 0 # valid values: 0 | 1
self._groups_quota = 300
@property
def summary_map(self):
return {
"GroupPolicySizeQuota": self._group_policy_size_quota,
"InstanceProfilesQuota": self._instance_profiles_quota,
"Policies": self._policies,
"GroupsPerUserQuota": self._groups_per_user_quota,
"InstanceProfiles": self._instance_profiles,
"AttachedPoliciesPerUserQuota": self._attached_policies_per_user_quota,
"Users": self._users,
"PoliciesQuota": self._policies_quota,
"Providers": self._providers,
"AccountMFAEnabled": self._account_mfa_enabled,
"AccessKeysPerUserQuota": self._access_keys_per_user_quota,
"AssumeRolePolicySizeQuota": self._assume_role_policy_size_quota,
"PolicyVersionsInUseQuota": self._policy_versions_in_use_quota,
"GlobalEndpointTokenVersion": self._global_endpoint_token_version,
"VersionsPerPolicyQuota": self._versions_per_policy_quota,
"AttachedPoliciesPerGroupQuota": self._attached_policies_per_group_quota,
"PolicySizeQuota": self._policy_size_quota,
"Groups": self._groups,
"AccountSigningCertificatesPresent": self._account_signing_certificates_present,
"UsersQuota": self._users_quota,
"ServerCertificatesQuota": self._server_certificates_quota,
"MFADevices": self._mfa_devices,
"UserPolicySizeQuota": self._user_policy_size_quota,
"PolicyVersionsInUse": self._policy_versions_in_use,
"ServerCertificates": self._server_certificates,
"Roles": self._roles,
"RolesQuota": self._roles_quota,
"SigningCertificatesPerUserQuota": self._signing_certificates_per_user_quota,
"MFADevicesInUse": self._mfa_devices_in_use,
"RolePolicySizeQuota": self._role_policy_size_quota,
"AttachedPoliciesPerRoleQuota": self._attached_policies_per_role_quota,
"AccountAccessKeysPresent": self._account_access_keys_present,
"GroupsQuota": self._groups_quota,
}
@property
def _groups(self):
return len(self._iam_backend.groups)
@property
def _instance_profiles(self):
return len(self._iam_backend.instance_profiles)
@property
def _mfa_devices(self):
# Unclear whether hardware MFA devices are also counted here
return len(self._iam_backend.virtual_mfa_devices)
@property
def _mfa_devices_in_use(self):
devices = 0
for user in self._iam_backend.users.values():
devices += len(user.mfa_devices)
return devices
@property
def _policies(self):
customer_policies = [
policy
for policy in self._iam_backend.managed_policies
if not policy.startswith("arn:aws:iam::aws:policy")
]
return len(customer_policies)
@property
def _policy_versions_in_use(self):
attachments = 0
for policy in self._iam_backend.managed_policies.values():
attachments += policy.attachment_count
return attachments
@property
def _providers(self):
providers = len(self._iam_backend.saml_providers) + len(
self._iam_backend.open_id_providers
)
return providers
@property
def _roles(self):
return len(self._iam_backend.roles)
@property
def _server_certificates(self):
return len(self._iam_backend.certificates)
@property
def _users(self):
return len(self._iam_backend.users)
class IAMBackend(BaseBackend):
def __init__(self):
self.instance_profiles = {}
@ -751,6 +922,7 @@ class IAMBackend(BaseBackend):
self.policy_arn_regex = re.compile(r"^arn:aws:iam::[0-9]*:policy/.*$")
self.virtual_mfa_devices = {}
self.account_password_policy = None
self.account_summary = AccountSummary(self)
super(IAMBackend, self).__init__()
def _init_managed_policies(self):
@ -818,6 +990,12 @@ class IAMBackend(BaseBackend):
policy = ManagedPolicy(
policy_name, description=description, document=policy_document, path=path
)
if policy.arn in self.managed_policies:
raise EntityAlreadyExists(
"A policy called {0} already exists. Duplicate names are not allowed.".format(
policy_name
)
)
self.managed_policies[policy.arn] = policy
return policy
@ -892,6 +1070,10 @@ class IAMBackend(BaseBackend):
permissions_boundary
),
)
if [role for role in self.get_roles() if role.name == role_name]:
raise EntityAlreadyExists(
"Role with name {0} already exists.".format(role_name)
)
clean_tags = self._tag_verification(tags)
role = Role(
@ -1104,11 +1286,17 @@ class IAMBackend(BaseBackend):
raise IAMNotFoundException("Policy not found")
def create_instance_profile(self, name, path, role_ids):
if self.instance_profiles.get(name):
raise IAMConflictException(
code="EntityAlreadyExists",
message="Instance Profile {0} already exists.".format(name),
)
instance_profile_id = random_resource_id()
roles = [iam_backend.get_role_by_id(role_id) for role_id in role_ids]
instance_profile = InstanceProfile(instance_profile_id, name, path, roles)
self.instance_profiles[name] = instance_profile
return instance_profile
def get_instance_profile(self, profile_name):
@ -1146,7 +1334,7 @@ class IAMBackend(BaseBackend):
def get_all_server_certs(self, marker=None):
return self.certificates.values()
def upload_server_certificate(
self, cert_name, cert_body, private_key, cert_chain=None, path=None
):
certificate_id = random_resource_id()
@ -1221,6 +1409,14 @@ class IAMBackend(BaseBackend):
group = self.get_group(group_name)
return group.get_policy(policy_name)
def delete_group(self, group_name):
try:
del self.groups[group_name]
except KeyError:
raise IAMNotFoundException(
"The group with name {0} cannot be found.".format(group_name)
)
def create_user(self, user_name, path="/"):
if user_name in self.users:
raise IAMConflictException(
@ -1431,6 +1627,26 @@ class IAMBackend(BaseBackend):
user = self.get_user(user_name)
user.delete_access_key(access_key_id)
def upload_ssh_public_key(self, user_name, ssh_public_key_body):
user = self.get_user(user_name)
return user.upload_ssh_public_key(ssh_public_key_body)
def get_ssh_public_key(self, user_name, ssh_public_key_id):
user = self.get_user(user_name)
return user.get_ssh_public_key(ssh_public_key_id)
def get_all_ssh_public_keys(self, user_name):
user = self.get_user(user_name)
return user.get_all_ssh_public_keys()
def update_ssh_public_key(self, user_name, ssh_public_key_id, status):
user = self.get_user(user_name)
return user.update_ssh_public_key(ssh_public_key_id, status)
def delete_ssh_public_key(self, user_name, ssh_public_key_id):
user = self.get_user(user_name)
return user.delete_ssh_public_key(ssh_public_key_id)
def enable_mfa_device(
self, user_name, serial_number, authentication_code_1, authentication_code_2
):
@ -1717,5 +1933,8 @@ class IAMBackend(BaseBackend):
self.account_password_policy = None
def get_account_summary(self):
return self.account_summary
iam_backend = IAMBackend()
@ -351,7 +351,7 @@ class IamResponse(BaseResponse):
private_key = self._get_param("PrivateKey")
cert_chain = self._get_param("CertificateName")
cert = iam_backend.upload_server_certificate(
cert_name, cert_body, private_key, cert_chain=cert_chain, path=path
)
template = self.response_template(UPLOAD_CERT_TEMPLATE)
@ -428,6 +428,12 @@ class IamResponse(BaseResponse):
template = self.response_template(GET_GROUP_POLICY_TEMPLATE)
return template.render(name="GetGroupPolicyResponse", **policy_result)
def delete_group(self):
group_name = self._get_param("GroupName")
iam_backend.delete_group(group_name)
template = self.response_template(GENERIC_EMPTY_TEMPLATE)
return template.render(name="DeleteGroup")
def create_user(self):
user_name = self._get_param("UserName")
path = self._get_param("Path")
@ -584,6 +590,46 @@ class IamResponse(BaseResponse):
template = self.response_template(GENERIC_EMPTY_TEMPLATE)
return template.render(name="DeleteAccessKey")
def upload_ssh_public_key(self):
user_name = self._get_param("UserName")
ssh_public_key_body = self._get_param("SSHPublicKeyBody")
key = iam_backend.upload_ssh_public_key(user_name, ssh_public_key_body)
template = self.response_template(UPLOAD_SSH_PUBLIC_KEY_TEMPLATE)
return template.render(key=key)
def get_ssh_public_key(self):
user_name = self._get_param("UserName")
ssh_public_key_id = self._get_param("SSHPublicKeyId")
key = iam_backend.get_ssh_public_key(user_name, ssh_public_key_id)
template = self.response_template(GET_SSH_PUBLIC_KEY_TEMPLATE)
return template.render(key=key)
def list_ssh_public_keys(self):
user_name = self._get_param("UserName")
keys = iam_backend.get_all_ssh_public_keys(user_name)
template = self.response_template(LIST_SSH_PUBLIC_KEYS_TEMPLATE)
return template.render(keys=keys)
def update_ssh_public_key(self):
user_name = self._get_param("UserName")
ssh_public_key_id = self._get_param("SSHPublicKeyId")
status = self._get_param("Status")
iam_backend.update_ssh_public_key(user_name, ssh_public_key_id, status)
template = self.response_template(UPDATE_SSH_PUBLIC_KEY_TEMPLATE)
return template.render()
def delete_ssh_public_key(self):
user_name = self._get_param("UserName")
ssh_public_key_id = self._get_param("SSHPublicKeyId")
iam_backend.delete_ssh_public_key(user_name, ssh_public_key_id)
template = self.response_template(DELETE_SSH_PUBLIC_KEY_TEMPLATE)
return template.render()
def deactivate_mfa_device(self):
user_name = self._get_param("UserName")
serial_number = self._get_param("SerialNumber")
@ -882,6 +928,12 @@ class IamResponse(BaseResponse):
template = self.response_template(DELETE_ACCOUNT_PASSWORD_POLICY_TEMPLATE)
return template.render()
def get_account_summary(self):
account_summary = iam_backend.get_account_summary()
template = self.response_template(GET_ACCOUNT_SUMMARY_TEMPLATE)
return template.render(summary_map=account_summary.summary_map)
LIST_ENTITIES_FOR_POLICY_TEMPLATE = """<ListEntitiesForPolicyResponse>
<ListEntitiesForPolicyResult>
@ -1684,6 +1736,73 @@ GET_ACCESS_KEY_LAST_USED_TEMPLATE = """
</GetAccessKeyLastUsedResponse>
"""
UPLOAD_SSH_PUBLIC_KEY_TEMPLATE = """<UploadSSHPublicKeyResponse>
<UploadSSHPublicKeyResult>
<SSHPublicKey>
<UserName>{{ key.user_name }}</UserName>
<SSHPublicKeyBody>{{ key.ssh_public_key_body }}</SSHPublicKeyBody>
<SSHPublicKeyId>{{ key.ssh_public_key_id }}</SSHPublicKeyId>
<Fingerprint>{{ key.fingerprint }}</Fingerprint>
<Status>{{ key.status }}</Status>
<UploadDate>{{ key.uploaded_iso_8601 }}</UploadDate>
</SSHPublicKey>
</UploadSSHPublicKeyResult>
<ResponseMetadata>
<RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
</ResponseMetadata>
</UploadSSHPublicKeyResponse>"""
GET_SSH_PUBLIC_KEY_TEMPLATE = """<GetSSHPublicKeyResponse>
<GetSSHPublicKeyResult>
<SSHPublicKey>
<UserName>{{ key.user_name }}</UserName>
<SSHPublicKeyBody>{{ key.ssh_public_key_body }}</SSHPublicKeyBody>
<SSHPublicKeyId>{{ key.ssh_public_key_id }}</SSHPublicKeyId>
<Fingerprint>{{ key.fingerprint }}</Fingerprint>
<Status>{{ key.status }}</Status>
<UploadDate>{{ key.uploaded_iso_8601 }}</UploadDate>
</SSHPublicKey>
</GetSSHPublicKeyResult>
<ResponseMetadata>
<RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
</ResponseMetadata>
</GetSSHPublicKeyResponse>"""
LIST_SSH_PUBLIC_KEYS_TEMPLATE = """<ListSSHPublicKeysResponse>
<ListSSHPublicKeysResult>
<SSHPublicKeys>
{% for key in keys %}
<member>
<UserName>{{ key.user_name }}</UserName>
<SSHPublicKeyId>{{ key.ssh_public_key_id }}</SSHPublicKeyId>
<Status>{{ key.status }}</Status>
<UploadDate>{{ key.uploaded_iso_8601 }}</UploadDate>
</member>
{% endfor %}
</SSHPublicKeys>
<IsTruncated>false</IsTruncated>
</ListSSHPublicKeysResult>
<ResponseMetadata>
<RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
</ResponseMetadata>
</ListSSHPublicKeysResponse>"""
UPDATE_SSH_PUBLIC_KEY_TEMPLATE = """<UpdateSSHPublicKeyResponse>
<UpdateSSHPublicKeyResult>
</UpdateSSHPublicKeyResult>
<ResponseMetadata>
<RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
</ResponseMetadata>
</UpdateSSHPublicKeyResponse>"""
DELETE_SSH_PUBLIC_KEY_TEMPLATE = """<DeleteSSHPublicKeyResponse>
<DeleteSSHPublicKeyResult>
</DeleteSSHPublicKeyResult>
<ResponseMetadata>
<RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
</ResponseMetadata>
</DeleteSSHPublicKeyResponse>"""
CREDENTIAL_REPORT_GENERATING = """
<GenerateCredentialReportResponse>
<GenerateCredentialReportResult>
@ -2255,3 +2374,20 @@ DELETE_ACCOUNT_PASSWORD_POLICY_TEMPLATE = """<DeleteAccountPasswordPolicyRespons
<RequestId>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestId>
</ResponseMetadata>
</DeleteAccountPasswordPolicyResponse>"""
GET_ACCOUNT_SUMMARY_TEMPLATE = """<GetAccountSummaryResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
<GetAccountSummaryResult>
<SummaryMap>
{% for key, value in summary_map.items() %}
<entry>
<key>{{ key }}</key>
<value>{{ value }}</value>
</entry>
{% endfor %}
</SummaryMap>
</GetAccountSummaryResult>
<ResponseMetadata>
<RequestId>85cb9b90-ac28-11e4-a88d-97964EXAMPLE</RequestId>
</ResponseMetadata>
</GetAccountSummaryResponse>"""
@ -269,10 +269,32 @@ class OrganizationsBackend(BaseBackend):
)
return account
def get_account_by_attr(self, attr, value):
account = next(
(
account
for account in self.accounts
if hasattr(account, attr) and getattr(account, attr) == value
),
None,
)
if account is None:
raise RESTError(
"AccountNotFoundException",
"You specified an account that doesn't exist.",
)
return account
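`get_account_by_attr` above uses `next()` over a generator with a `None` default to find the first matching account. The same lookup pattern in isolation (names here are illustrative):

```python
def find_by_attr(items, attr, value):
    # Return the first object whose attribute equals value, else None.
    return next(
        (item for item in items if getattr(item, attr, None) == value),
        None,
    )
```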
def describe_account(self, **kwargs):
account = self.get_account_by_id(kwargs["AccountId"])
return account.describe()
def describe_create_account_status(self, **kwargs):
account = self.get_account_by_attr(
"create_account_status_id", kwargs["CreateAccountRequestId"]
)
return account.create_account_status
def list_accounts(self):
return dict(
Accounts=[account.describe()["Account"] for account in self.accounts]
@ -65,6 +65,13 @@ class OrganizationsResponse(BaseResponse):
self.organizations_backend.describe_account(**self.request_params)
)
def describe_create_account_status(self):
return json.dumps(
self.organizations_backend.describe_create_account_status(
**self.request_params
)
)
def list_accounts(self):
return json.dumps(self.organizations_backend.list_accounts())
@ -125,7 +125,7 @@ class HTTPrettyRequest(BaseHTTPRequestHandler, BaseClass):
internal `parse_request` method.
It also replaces the `rfile` and `wfile` attributes with StringIO
instances so that we guarantee that it won't make any I/O, neither
for writing nor reading.
It has some convenience attributes:
@ -14,23 +14,21 @@ class ResourceNotFoundException(SecretsManagerClientError):
)
class SecretNotFoundException(SecretsManagerClientError):
def __init__(self):
self.code = 404
super(SecretNotFoundException, self).__init__(
"ResourceNotFoundException",
message="Secrets Manager can't find the specified secret.",
)
class SecretHasNoValueException(SecretsManagerClientError):
def __init__(self, version_stage):
self.code = 404
super(SecretHasNoValueException, self).__init__(
"ResourceNotFoundException",
message="Secrets Manager can't find the specified secret "
"value for staging label: {}".format(version_stage),
)
@ -190,7 +190,7 @@ def create_backend_app(service):
index = 2
while endpoint in backend_app.view_functions:
# HACK: Sometimes we map the same view to multiple url_paths. Flask
# requires us to have different names.
endpoint = original_endpoint + str(index)
index += 1
@ -147,7 +147,7 @@ class SESBackend(BaseBackend):
def __type_of_message__(self, destinations):
"""Checks the destination for any special address that could indicate delivery,
complaint or bounce like in SES simulator"""
alladdress = (
destinations.get("ToAddresses", [])
+ destinations.get("CcAddresses", [])

View File

@@ -227,7 +227,7 @@ class Subscription(BaseModel):
             return False
         for attribute_values in attribute_values:
-            # Even the offical documentation states a 5 digits of accuracy after the decimal point for numerics, in reality it is 6
+            # Even the official documentation states a 5 digits of accuracy after the decimal point for numerics, in reality it is 6
             # https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html#subscription-filter-policy-constraints
             if int(attribute_values * 1000000) == int(rule * 1000000):
                 return True
@@ -573,7 +573,7 @@ class SNSBackend(BaseBackend):
         combinations = 1
         for rules in six.itervalues(value):
             combinations *= len(rules)
-        # Even the offical documentation states the total combination of values must not exceed 100, in reality it is 150
+        # Even the official documentation states the total combination of values must not exceed 100, in reality it is 150
         # https://docs.aws.amazon.com/sns/latest/dg/sns-subscription-filter-policies.html#subscription-filter-policy-constraints
         if combinations > 150:
             raise SNSInvalidParameter(
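The numeric matching in the first hunk above can be sketched standalone. `numeric_match` is an illustrative name, not moto's API; the truncation-to-six-decimals trick is the part taken from the diff:

```python
def numeric_match(attribute_value, rule):
    # SNS documents 5 digits of accuracy after the decimal point for
    # numeric filter-policy matching; in practice 6 digits are compared.
    return int(attribute_value * 1000000) == int(rule * 1000000)

print(numeric_match(3.1415926, 3.1415927))  # True: values differ only in the 7th decimal
print(numeric_match(3.14159, 3.14160))      # False: values differ within 6 decimals
```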

View File

@@ -77,7 +77,7 @@ class SNSResponse(BaseResponse):
                 transform_value = value["StringValue"]
             elif "BinaryValue" in value:
                 transform_value = value["BinaryValue"]
-            if not transform_value:
+            if transform_value == "":
                 raise InvalidParameterValue(
                     "The message attribute '{0}' must contain non-empty "
                     "message attribute value for message attribute "

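The change from `if not transform_value:` to `if transform_value == "":` narrows the check: Python's truthiness test trips on every falsy object, while the explicit comparison rejects only an empty string. A minimal sketch (the helper name is illustrative):

```python
def is_empty_attribute_value(value):
    # Explicit comparison: only an empty string counts as "no value".
    return value == ""

# `not value` is also True for 0, False, None, [] - an over-broad rejection:
print(not 0)                         # True
print(is_empty_attribute_value(0))   # False: 0 is a legitimate payload
print(is_empty_attribute_value(""))  # True
```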
View File

@@ -761,7 +761,7 @@ class SQSBackend(BaseBackend):
         new_messages = []
         for message in queue._messages:
-            # Only delete message if it is not visible and the reciept_handle
+            # Only delete message if it is not visible and the receipt_handle
             # matches.
             if message.receipt_handle == receipt_handle:
                 queue.pending_messages.remove(message)

View File

@@ -430,7 +430,7 @@ class WorkflowExecution(BaseModel):
         )

     def fail(self, event_id, details=None, reason=None):
-        # TODO: implement lenght constraints on details/reason
+        # TODO: implement length constraints on details/reason
         self.execution_status = "CLOSED"
         self.close_status = "FAILED"
         self.close_timestamp = unix_time()

View File

@@ -7,16 +7,18 @@ import boto3

 script_dir = os.path.dirname(os.path.abspath(__file__))

+alternative_service_names = {'lambda': 'awslambda'}
+

 def get_moto_implementation(service_name):
-    service_name_standardized = service_name.replace("-", "") if "-" in service_name else service_name
-    if not hasattr(moto, service_name_standardized):
+    service_name = service_name.replace("-", "") if "-" in service_name else service_name
+    alt_service_name = alternative_service_names[service_name] if service_name in alternative_service_names else service_name
+    if not hasattr(moto, alt_service_name):
         return None
-    module = getattr(moto, service_name_standardized)
+    module = getattr(moto, alt_service_name)
     if module is None:
         return None
-    mock = getattr(module, "mock_{}".format(service_name_standardized))
+    mock = getattr(module, "mock_{}".format(service_name))
     if mock is None:
         return None
     backends = list(mock().backends.values())
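The alternative-name lookup exists because `lambda` is a reserved word in Python: the module is exposed as `moto.awslambda`, while the mock decorator keeps the service name (`mock_lambda`). A standalone sketch of the mapping logic, using a plain dict instead of importing moto (`module_and_mock_names` is an illustrative helper):

```python
alternative_service_names = {"lambda": "awslambda"}

def module_and_mock_names(service_name):
    # Strip hyphens first, e.g. "step-functions" -> "stepfunctions".
    service_name = service_name.replace("-", "")
    # The module may live under an alternative name ("lambda" is a keyword),
    # but the mock decorator keeps the original service name.
    alt = alternative_service_names.get(service_name, service_name)
    return alt, "mock_{}".format(service_name)

print(module_and_mock_names("lambda"))    # ('awslambda', 'mock_lambda')
print(module_and_mock_names("datasync"))  # ('datasync', 'mock_datasync')
```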

View File

@@ -39,7 +39,7 @@ install_requires = [
     "werkzeug",
     "PyYAML>=5.1",
     "pytz",
-    "python-dateutil<3.0.0,>=2.1",
+    "python-dateutil<2.8.1,>=2.1",
     "python-jose<4.0.0",
     "mock",
     "docker>=2.5.1",

View File

@@ -164,7 +164,7 @@ if settings.TEST_SERVER_MODE:
     conn = boto3.client("lambda", "us-west-2")
     conn.create_function(
         FunctionName="testFunction",
-        Runtime="python2.7",
+        Runtime="python3.7",
         Role=get_role_name(),
         Handler="lambda_function.lambda_handler",
         Code={"ZipFile": get_test_zip_file2()},
@@ -186,18 +186,20 @@ if settings.TEST_SERVER_MODE:
         vol.id,
         vol.state,
         vol.size,
-        json.dumps(in_data),
+        json.dumps(in_data).replace(
+            " ", ""
+        ),  # Makes the tests pass as the result is missing the whitespace
     )

     log_result = base64.b64decode(result["LogResult"]).decode("utf-8")

-    # fix for running under travis (TODO: investigate why it has an extra newline)
+    # The Docker lambda invocation will return an additional '\n', so need to replace it:
     log_result = log_result.replace("\n\n", "\n")
     log_result.should.equal(msg)

     payload = result["Payload"].read().decode("utf-8")

-    # fix for running under travis (TODO: investigate why it has an extra newline)
+    # The Docker lambda invocation will return an additional '\n', so need to replace it:
     payload = payload.replace("\n\n", "\n")
     payload.should.equal(msg)
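The decode-and-normalize step above can be sketched in isolation: `LogResult` is base64-encoded, and the Docker-based invoke appends one extra newline that is collapsed before comparing (`decode_log_result` is an illustrative name, not moto's API):

```python
import base64

def decode_log_result(raw_log_result):
    # LogResult comes back base64-encoded; the Docker lambda runtime
    # appends an extra '\n', which is collapsed before comparison.
    decoded = base64.b64decode(raw_log_result).decode("utf-8")
    return decoded.replace("\n\n", "\n")

msg = "hello\n"
# Simulate the runtime's extra trailing newline:
raw = base64.b64encode((msg + "\n").encode("utf-8")).decode("ascii")
print(decode_log_result(raw) == msg)  # True
```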

View File

@@ -11,6 +11,7 @@ from nose.tools import assert_raises
 from moto import mock_iam, mock_ec2, mock_s3, mock_sts, mock_elbv2, mock_rds2
 from moto.core import set_initial_no_auth_action_count
 from moto.iam.models import ACCOUNT_ID
+from uuid import uuid4


 @mock_iam
@@ -71,8 +72,10 @@ def create_user_with_access_key_and_multiple_policies(

 def create_group_with_attached_policy_and_add_user(
-    user_name, policy_document, group_name="test-group", policy_name="policy1"
+    user_name, policy_document, group_name="test-group", policy_name=None
 ):
+    if not policy_name:
+        policy_name = str(uuid4())
     client = boto3.client("iam", region_name="us-east-1")
     client.create_group(GroupName=group_name)
     policy_arn = client.create_policy(
@@ -101,8 +104,10 @@ def create_group_with_multiple_policies_and_add_user(
     attached_policy_document,
     group_name="test-group",
     inline_policy_name="policy1",
-    attached_policy_name="policy1",
+    attached_policy_name=None,
 ):
+    if not attached_policy_name:
+        attached_policy_name = str(uuid4())
     client = boto3.client("iam", region_name="us-east-1")
     client.create_group(GroupName=group_name)
     client.put_group_policy(
@@ -402,10 +407,10 @@ def test_s3_access_denied_with_denying_attached_group_policy():
         "Statement": [{"Effect": "Deny", "Action": "s3:List*", "Resource": "*"}],
     }
     access_key = create_user_with_access_key_and_attached_policy(
-        user_name, attached_policy_document
+        user_name, attached_policy_document, policy_name="policy1"
     )
     create_group_with_attached_policy_and_add_user(
-        user_name, group_attached_policy_document
+        user_name, group_attached_policy_document, policy_name="policy2"
     )
     client = boto3.client(
         "s3",
@@ -476,10 +481,16 @@ def test_access_denied_with_many_irrelevant_policies():
         "Statement": [{"Effect": "Deny", "Action": "lambda:*", "Resource": "*"}],
     }
     access_key = create_user_with_access_key_and_multiple_policies(
-        user_name, inline_policy_document, attached_policy_document
+        user_name,
+        inline_policy_document,
+        attached_policy_document,
+        attached_policy_name="policy1",
    )
     create_group_with_multiple_policies_and_add_user(
-        user_name, group_inline_policy_document, group_attached_policy_document
+        user_name,
+        group_inline_policy_document,
+        group_attached_policy_document,
+        attached_policy_name="policy2",
     )
     client = boto3.client(
         "ec2",

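The hunks above move the default policy name from a fixed `"policy1"` in the signature to a per-call `uuid4()` in the body, so tests that create several managed policies in one account no longer collide on the name. Generating the default inside the body also matters on its own: a `policy_name=str(uuid4())` default in the signature would be evaluated once, at function-definition time. A minimal sketch of the pattern (the helper is illustrative):

```python
from uuid import uuid4

def default_policy_name(policy_name=None):
    # Evaluate the default inside the body so every call gets a fresh name;
    # a uuid4() default in the signature would be fixed at definition time.
    if not policy_name:
        policy_name = str(uuid4())
    return policy_name

a, b = default_policy_name(), default_policy_name()
print(a != b)                          # True: each call gets a distinct name
print(default_policy_name("policy1"))  # explicit names still pass through
```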
View File

@@ -127,6 +127,22 @@ def test_list_locations():
     assert response["Locations"][2]["LocationUri"] == "s3://my_bucket/dir"


+@mock_datasync
+def test_delete_location():
+    client = boto3.client("datasync", region_name="us-east-1")
+    locations = create_locations(client, create_smb=True)
+    response = client.list_locations()
+    assert len(response["Locations"]) == 1
+    location_arn = locations["smb_arn"]
+
+    response = client.delete_location(LocationArn=location_arn)
+    response = client.list_locations()
+    assert len(response["Locations"]) == 0
+
+    with assert_raises(ClientError) as e:
+        response = client.delete_location(LocationArn=location_arn)
+
+
 @mock_datasync
 def test_create_task():
     client = boto3.client("datasync", region_name="us-east-1")
@@ -208,6 +224,72 @@ def test_describe_task_not_exist():
         client.describe_task(TaskArn="abc")


+@mock_datasync
+def test_update_task():
+    client = boto3.client("datasync", region_name="us-east-1")
+    locations = create_locations(client, create_s3=True, create_smb=True)
+
+    initial_name = "Initial_Name"
+    updated_name = "Updated_Name"
+    initial_options = {
+        "VerifyMode": "NONE",
+        "Atime": "BEST_EFFORT",
+        "Mtime": "PRESERVE",
+    }
+    updated_options = {
+        "VerifyMode": "POINT_IN_TIME_CONSISTENT",
+        "Atime": "BEST_EFFORT",
+        "Mtime": "PRESERVE",
+    }
+
+    response = client.create_task(
+        SourceLocationArn=locations["smb_arn"],
+        DestinationLocationArn=locations["s3_arn"],
+        Name=initial_name,
+        Options=initial_options,
+    )
+    task_arn = response["TaskArn"]
+
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["TaskArn"] == task_arn
+    assert response["Name"] == initial_name
+    assert response["Options"] == initial_options
+
+    response = client.update_task(
+        TaskArn=task_arn, Name=updated_name, Options=updated_options
+    )
+
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["TaskArn"] == task_arn
+    assert response["Name"] == updated_name
+    assert response["Options"] == updated_options
+
+    with assert_raises(ClientError) as e:
+        client.update_task(TaskArn="doesnt_exist")
+
+
+@mock_datasync
+def test_delete_task():
+    client = boto3.client("datasync", region_name="us-east-1")
+    locations = create_locations(client, create_s3=True, create_smb=True)
+
+    response = client.create_task(
+        SourceLocationArn=locations["smb_arn"],
+        DestinationLocationArn=locations["s3_arn"],
+        Name="task_name",
+    )
+
+    response = client.list_tasks()
+    assert len(response["Tasks"]) == 1
+    task_arn = response["Tasks"][0]["TaskArn"]
+    assert task_arn is not None
+
+    response = client.delete_task(TaskArn=task_arn)
+    response = client.list_tasks()
+    assert len(response["Tasks"]) == 0
+
+    with assert_raises(ClientError) as e:
+        response = client.delete_task(TaskArn=task_arn)
+
+
 @mock_datasync
 def test_start_task_execution():
     client = boto3.client("datasync", region_name="us-east-1")
@@ -261,6 +343,8 @@ def test_describe_task_execution():
         Name="task_name",
     )
     task_arn = response["TaskArn"]
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["Status"] == "AVAILABLE"

     response = client.start_task_execution(TaskArn=task_arn)
     task_execution_arn = response["TaskExecutionArn"]
@@ -270,26 +354,38 @@ def test_describe_task_execution():
     response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
     assert response["TaskExecutionArn"] == task_execution_arn
     assert response["Status"] == "INITIALIZING"
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["Status"] == "RUNNING"

     response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
     assert response["TaskExecutionArn"] == task_execution_arn
     assert response["Status"] == "PREPARING"
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["Status"] == "RUNNING"

     response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
     assert response["TaskExecutionArn"] == task_execution_arn
     assert response["Status"] == "TRANSFERRING"
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["Status"] == "RUNNING"

     response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
     assert response["TaskExecutionArn"] == task_execution_arn
     assert response["Status"] == "VERIFYING"
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["Status"] == "RUNNING"

     response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
     assert response["TaskExecutionArn"] == task_execution_arn
     assert response["Status"] == "SUCCESS"
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["Status"] == "AVAILABLE"

     response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
     assert response["TaskExecutionArn"] == task_execution_arn
     assert response["Status"] == "SUCCESS"
+    response = client.describe_task(TaskArn=task_arn)
+    assert response["Status"] == "AVAILABLE"
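The test above walks the mocked execution through INITIALIZING → PREPARING → TRANSFERRING → VERIFYING → SUCCESS, with the owning task reporting RUNNING until the execution terminates. A minimal sketch of such a stepper (all names are illustrative, not moto's internals, where each describe call advances the mock one state):

```python
EXECUTION_STATES = ["INITIALIZING", "PREPARING", "TRANSFERRING", "VERIFYING", "SUCCESS"]

class FakeTaskExecution:
    def __init__(self):
        self.step = 0

    @property
    def status(self):
        return EXECUTION_STATES[self.step]

    def iterate(self):
        # Advance one state, stopping at the terminal SUCCESS state.
        if self.step < len(EXECUTION_STATES) - 1:
            self.step += 1

    @property
    def task_status(self):
        # The owning task stays RUNNING until the execution terminates.
        return "AVAILABLE" if self.status == "SUCCESS" else "RUNNING"

ex = FakeTaskExecution()
seen = []
while ex.status != "SUCCESS":
    seen.append((ex.status, ex.task_status))
    ex.iterate()
print(seen)
print(ex.status, ex.task_status)  # SUCCESS AVAILABLE
```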
 @mock_datasync
@@ -317,11 +413,13 @@ def test_cancel_task_execution():

     response = client.describe_task(TaskArn=task_arn)
     assert response["CurrentTaskExecutionArn"] == task_execution_arn
+    assert response["Status"] == "RUNNING"

     response = client.cancel_task_execution(TaskExecutionArn=task_execution_arn)

     response = client.describe_task(TaskArn=task_arn)
     assert "CurrentTaskExecutionArn" not in response
+    assert response["Status"] == "AVAILABLE"

     response = client.describe_task_execution(TaskExecutionArn=task_execution_arn)
     assert response["Status"] == "ERROR"

View File

@@ -3319,3 +3319,66 @@ def _create_user_table():
         TableName="users", Item={"username": {"S": "user3"}, "foo": {"S": "bar"}}
     )
     return client
+
+
+@mock_dynamodb2
+def test_update_item_if_original_value_is_none():
+    dynamo = boto3.resource("dynamodb", region_name="eu-central-1")
+    dynamo.create_table(
+        AttributeDefinitions=[{"AttributeName": "job_id", "AttributeType": "S"}],
+        TableName="origin-rbu-dev",
+        KeySchema=[{"AttributeName": "job_id", "KeyType": "HASH"}],
+        ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
+    )
+    table = dynamo.Table("origin-rbu-dev")
+    table.put_item(Item={"job_id": "a", "job_name": None})
+
+    table.update_item(
+        Key={"job_id": "a"},
+        UpdateExpression="SET job_name = :output",
+        ExpressionAttributeValues={":output": "updated",},
+    )
+
+    table.scan()["Items"][0]["job_name"].should.equal("updated")
+
+
+@mock_dynamodb2
+def test_update_nested_item_if_original_value_is_none():
+    dynamo = boto3.resource("dynamodb", region_name="eu-central-1")
+    dynamo.create_table(
+        AttributeDefinitions=[{"AttributeName": "job_id", "AttributeType": "S"}],
+        TableName="origin-rbu-dev",
+        KeySchema=[{"AttributeName": "job_id", "KeyType": "HASH"}],
+        ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
+    )
+    table = dynamo.Table("origin-rbu-dev")
+    table.put_item(Item={"job_id": "a", "job_details": {"job_name": None}})
+
+    table.update_item(
+        Key={"job_id": "a"},
+        UpdateExpression="SET job_details.job_name = :output",
+        ExpressionAttributeValues={":output": "updated",},
+    )
+
+    table.scan()["Items"][0]["job_details"]["job_name"].should.equal("updated")
+
+
+@mock_dynamodb2
+def test_allow_update_to_item_with_different_type():
+    dynamo = boto3.resource("dynamodb", region_name="eu-central-1")
+    dynamo.create_table(
+        AttributeDefinitions=[{"AttributeName": "job_id", "AttributeType": "S"}],
+        TableName="origin-rbu-dev",
+        KeySchema=[{"AttributeName": "job_id", "KeyType": "HASH"}],
+        ProvisionedThroughput={"ReadCapacityUnits": 1, "WriteCapacityUnits": 1},
+    )
+    table = dynamo.Table("origin-rbu-dev")
+    table.put_item(Item={"job_id": "a", "job_details": {"job_name": {"nested": "yes"}}})
+    table.put_item(Item={"job_id": "b", "job_details": {"job_name": {"nested": "yes"}}})
+
+    table.update_item(
+        Key={"job_id": "a"},
+        UpdateExpression="SET job_details.job_name = :output",
+        ExpressionAttributeValues={":output": "updated"},
+    )
+
+    table.get_item(Key={"job_id": "a"})["Item"]["job_details"][
+        "job_name"
+    ].should.be.equal("updated")
+    table.get_item(Key={"job_id": "b"})["Item"]["job_details"][
+        "job_name"
+    ].should.be.equal({"nested": "yes"})
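All three tests exercise the same property of DynamoDB's `SET` action: it overwrites the target attribute unconditionally, whatever the original value was (None, a nested map, a scalar). A minimal sketch of that semantics on a plain dict, with a hypothetical `apply_set` helper standing in for the update-expression machinery:

```python
def apply_set(item, path, value):
    # SET overwrites the target attribute regardless of its current type
    # (None, a nested map, a scalar) - it does not merge or type-check.
    parts = path.split(".")
    target = item
    for part in parts[:-1]:
        target = target[part]
    target[parts[-1]] = value
    return item

print(apply_set({"job_id": "a", "job_name": None}, "job_name", "updated"))
# {'job_id': 'a', 'job_name': 'updated'}
print(apply_set({"job_id": "a", "job_details": {"job_name": {"nested": "yes"}}},
                "job_details.job_name", "updated"))
# {'job_id': 'a', 'job_details': {'job_name': 'updated'}}
```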

View File

@@ -11,6 +11,7 @@ from boto.exception import EC2ResponseError
 from botocore.exceptions import ParamValidationError, ClientError
 import json
 import sure  # noqa
+import random

 from moto import mock_cloudformation_deprecated, mock_ec2, mock_ec2_deprecated
@@ -474,3 +475,127 @@ def test_create_subnets_with_overlapping_cidr_blocks():
             subnet_cidr_block
         )
     )
+
+
+@mock_ec2
+def test_available_ip_addresses_in_subnet():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+    client = boto3.client("ec2", region_name="us-west-1")
+    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+    cidr_range_addresses = [
+        ("10.0.0.0/16", 65531),
+        ("10.0.0.0/17", 32763),
+        ("10.0.0.0/18", 16379),
+        ("10.0.0.0/19", 8187),
+        ("10.0.0.0/20", 4091),
+        ("10.0.0.0/21", 2043),
+        ("10.0.0.0/22", 1019),
+        ("10.0.0.0/23", 507),
+        ("10.0.0.0/24", 251),
+        ("10.0.0.0/25", 123),
+        ("10.0.0.0/26", 59),
+        ("10.0.0.0/27", 27),
+        ("10.0.0.0/28", 11),
+    ]
+    for (cidr, expected_count) in cidr_range_addresses:
+        validate_subnet_details(client, vpc, cidr, expected_count)
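The expected counts in the table above follow directly from the CIDR prefix: AWS reserves five addresses per subnet (network address, VPC router, DNS, one reserved for future use, and broadcast), so a /24 leaves 256 - 5 = 251. The arithmetic can be checked with the standard-library `ipaddress` module:

```python
import ipaddress

def expected_available_ips(cidr):
    # AWS reserves 5 addresses in every subnet: the network address, the
    # VPC router, DNS, one for future use, and the broadcast address.
    return ipaddress.ip_network(cidr).num_addresses - 5

for cidr, expected in [("10.0.0.0/16", 65531), ("10.0.0.0/24", 251), ("10.0.0.0/28", 11)]:
    assert expected_available_ips(cidr) == expected
print("all CIDR counts match")
```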
+@mock_ec2
+def test_available_ip_addresses_in_subnet_with_enis():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+    client = boto3.client("ec2", region_name="us-west-1")
+    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+    # Verify behaviour for various CIDR ranges (...)
+    # Don't try to assign ENIs to /27 and /28, as there are not a lot of IP addresses to go around
+    cidr_range_addresses = [
+        ("10.0.0.0/16", 65531),
+        ("10.0.0.0/17", 32763),
+        ("10.0.0.0/18", 16379),
+        ("10.0.0.0/19", 8187),
+        ("10.0.0.0/20", 4091),
+        ("10.0.0.0/21", 2043),
+        ("10.0.0.0/22", 1019),
+        ("10.0.0.0/23", 507),
+        ("10.0.0.0/24", 251),
+        ("10.0.0.0/25", 123),
+        ("10.0.0.0/26", 59),
+    ]
+    for (cidr, expected_count) in cidr_range_addresses:
+        validate_subnet_details_after_creating_eni(client, vpc, cidr, expected_count)
+
+
+def validate_subnet_details(client, vpc, cidr, expected_ip_address_count):
+    subnet = client.create_subnet(
+        VpcId=vpc.id, CidrBlock=cidr, AvailabilityZone="us-west-1b"
+    )["Subnet"]
+    subnet["AvailableIpAddressCount"].should.equal(expected_ip_address_count)
+    client.delete_subnet(SubnetId=subnet["SubnetId"])
+
+
+def validate_subnet_details_after_creating_eni(
+    client, vpc, cidr, expected_ip_address_count
+):
+    subnet = client.create_subnet(
+        VpcId=vpc.id, CidrBlock=cidr, AvailabilityZone="us-west-1b"
+    )["Subnet"]
+    # Create a random number of Elastic Network Interfaces
+    nr_of_eni_to_create = random.randint(0, 5)
+    ip_addresses_assigned = 0
+    enis_created = []
+    for i in range(0, nr_of_eni_to_create):
+        # Create a random number of IP addresses per ENI
+        nr_of_ip_addresses = random.randint(1, 5)
+        if nr_of_ip_addresses == 1:
+            # Pick the first available IP address (First 4 are reserved by AWS)
+            private_address = "10.0.0." + str(ip_addresses_assigned + 4)
+            eni = client.create_network_interface(
+                SubnetId=subnet["SubnetId"], PrivateIpAddress=private_address
+            )["NetworkInterface"]
+            enis_created.append(eni)
+            ip_addresses_assigned = ip_addresses_assigned + 1
+        else:
+            # Assign a list of IP addresses
+            private_addresses = [
+                "10.0.0." + str(4 + ip_addresses_assigned + i)
+                for i in range(0, nr_of_ip_addresses)
+            ]
+            eni = client.create_network_interface(
+                SubnetId=subnet["SubnetId"],
+                PrivateIpAddresses=[
+                    {"PrivateIpAddress": address} for address in private_addresses
+                ],
+            )["NetworkInterface"]
+            enis_created.append(eni)
+            ip_addresses_assigned = ip_addresses_assigned + nr_of_ip_addresses + 1
+    # Verify that the nr of available IP addresses takes these ENIs into account
+    updated_subnet = client.describe_subnets(SubnetIds=[subnet["SubnetId"]])["Subnets"][
+        0
+    ]
+    private_addresses = [
+        eni["PrivateIpAddress"] for eni in enis_created if eni["PrivateIpAddress"]
+    ]
+    for eni in enis_created:
+        private_addresses.extend(
+            [address["PrivateIpAddress"] for address in eni["PrivateIpAddresses"]]
+        )
+    error_msg = (
+        "Nr of IP addresses for Subnet with CIDR {0} is incorrect. Expected: {1}, Actual: {2}. "
+        "Addresses: {3}"
+    )
+    with sure.ensure(
+        error_msg,
+        cidr,
+        str(expected_ip_address_count),
+        updated_subnet["AvailableIpAddressCount"],
+        str(private_addresses),
+    ):
+        updated_subnet["AvailableIpAddressCount"].should.equal(
+            expected_ip_address_count - ip_addresses_assigned
+        )
+    # Clean up, as we have to create a few more subnets that shouldn't interfere with each other
+    for eni in enis_created:
+        client.delete_network_interface(NetworkInterfaceId=eni["NetworkInterfaceId"])
+    client.delete_subnet(SubnetId=subnet["SubnetId"])

View File

@@ -678,3 +678,150 @@ def test_create_vpc_with_invalid_cidr_range():
         "An error occurred (InvalidVpc.Range) when calling the CreateVpc "
         "operation: The CIDR '{}' is invalid.".format(vpc_cidr_block)
     )
+
+
+@mock_ec2
+def test_enable_vpc_classic_link():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")
+
+    response = ec2.meta.client.enable_vpc_classic_link(VpcId=vpc.id)
+    assert response.get("Return").should.be.true
+
+
+@mock_ec2
+def test_enable_vpc_classic_link_failure():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.90.0.0/16")
+
+    response = ec2.meta.client.enable_vpc_classic_link(VpcId=vpc.id)
+    assert response.get("Return").should.be.false
+
+
+@mock_ec2
+def test_disable_vpc_classic_link():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+
+    ec2.meta.client.enable_vpc_classic_link(VpcId=vpc.id)
+    response = ec2.meta.client.disable_vpc_classic_link(VpcId=vpc.id)
+    assert response.get("Return").should.be.false
+
+
+@mock_ec2
+def test_describe_classic_link_enabled():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+
+    ec2.meta.client.enable_vpc_classic_link(VpcId=vpc.id)
+    response = ec2.meta.client.describe_vpc_classic_link(VpcIds=[vpc.id])
+    assert response.get("Vpcs")[0].get("ClassicLinkEnabled").should.be.true
+
+
+@mock_ec2
+def test_describe_classic_link_disabled():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.90.0.0/16")
+
+    response = ec2.meta.client.describe_vpc_classic_link(VpcIds=[vpc.id])
+    assert response.get("Vpcs")[0].get("ClassicLinkEnabled").should.be.false
+
+
+@mock_ec2
+def test_describe_classic_link_multiple():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc1 = ec2.create_vpc(CidrBlock="10.90.0.0/16")
+    vpc2 = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+
+    ec2.meta.client.enable_vpc_classic_link(VpcId=vpc2.id)
+    response = ec2.meta.client.describe_vpc_classic_link(VpcIds=[vpc1.id, vpc2.id])
+    # describe_vpc_classic_link reports "ClassicLinkEnabled", not the
+    # "ClassicLinkDnsSupported" key used by the DNS-support API below.
+    expected = [
+        {"VpcId": vpc1.id, "ClassicLinkEnabled": False},
+        {"VpcId": vpc2.id, "ClassicLinkEnabled": True},
+    ]
+
+    # Sort both sides before comparing, because the results can come back in
+    # random order; list.sort() returns None, so use sorted() instead.
+    sorted(response.get("Vpcs"), key=lambda x: x["VpcId"]).should.equal(
+        sorted(expected, key=lambda x: x["VpcId"])
+    )
+@mock_ec2
+def test_enable_vpc_classic_link_dns_support():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.1.0.0/16")
+
+    response = ec2.meta.client.enable_vpc_classic_link_dns_support(VpcId=vpc.id)
+    assert response.get("Return").should.be.true
+
+
+@mock_ec2
+def test_disable_vpc_classic_link_dns_support():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+
+    ec2.meta.client.enable_vpc_classic_link_dns_support(VpcId=vpc.id)
+    response = ec2.meta.client.disable_vpc_classic_link_dns_support(VpcId=vpc.id)
+    assert response.get("Return").should.be.false
+
+
+@mock_ec2
+def test_describe_classic_link_dns_support_enabled():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+
+    ec2.meta.client.enable_vpc_classic_link_dns_support(VpcId=vpc.id)
+    response = ec2.meta.client.describe_vpc_classic_link_dns_support(VpcIds=[vpc.id])
+    assert response.get("Vpcs")[0].get("ClassicLinkDnsSupported").should.be.true
+
+
+@mock_ec2
+def test_describe_classic_link_dns_support_disabled():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc = ec2.create_vpc(CidrBlock="10.90.0.0/16")
+
+    response = ec2.meta.client.describe_vpc_classic_link_dns_support(VpcIds=[vpc.id])
+    assert response.get("Vpcs")[0].get("ClassicLinkDnsSupported").should.be.false
+
+
+@mock_ec2
+def test_describe_classic_link_dns_support_multiple():
+    ec2 = boto3.resource("ec2", region_name="us-west-1")
+
+    # Create VPC
+    vpc1 = ec2.create_vpc(CidrBlock="10.90.0.0/16")
+    vpc2 = ec2.create_vpc(CidrBlock="10.0.0.0/16")
+
+    ec2.meta.client.enable_vpc_classic_link_dns_support(VpcId=vpc2.id)
+    response = ec2.meta.client.describe_vpc_classic_link_dns_support(
+        VpcIds=[vpc1.id, vpc2.id]
+    )
+    expected = [
+        {"VpcId": vpc1.id, "ClassicLinkDnsSupported": False},
+        {"VpcId": vpc2.id, "ClassicLinkDnsSupported": True},
+    ]
+
+    # Sort both sides before comparing, because the results can come back in
+    # random order; list.sort() returns None, so use sorted() instead.
+    sorted(response.get("Vpcs"), key=lambda x: x["VpcId"]).should.equal(
+        sorted(expected, key=lambda x: x["VpcId"])
+    )
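A pitfall worth noting in the sort-then-compare assertions above: `list.sort()` sorts in place and returns None, so an assertion of the form `a.sort(...) == b.sort(...)` compares None to None and passes for any two lists. `sorted()` returns a new list and keeps the comparison meaningful:

```python
vpcs = [{"VpcId": "vpc-b"}, {"VpcId": "vpc-a"}]
expected = [{"VpcId": "vpc-a"}, {"VpcId": "vpc-b"}]

# list.sort() mutates in place and returns None, so this is always
# None == None -> True, even against a completely different list:
print(vpcs.sort(key=lambda x: x["VpcId"]) == [{"VpcId": "zzz"}].sort())  # True

# sorted() returns a new list, so the comparison actually checks contents:
print(sorted(vpcs, key=lambda x: x["VpcId"]) == expected)  # True
```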

View File

@@ -1,6 +1,7 @@
 import random
 import boto3
 import json
+import sure  # noqa

 from moto.events import mock_events
 from botocore.exceptions import ClientError
@@ -204,6 +205,53 @@ def test_permissions():
     assert resp_policy["Statement"][0]["Sid"] == "Account1"


+@mock_events
+def test_put_permission_errors():
+    client = boto3.client("events", "us-east-1")
+    client.create_event_bus(Name="test-bus")
+
+    client.put_permission.when.called_with(
+        EventBusName="non-existing",
+        Action="events:PutEvents",
+        Principal="111111111111",
+        StatementId="test",
+    ).should.throw(ClientError, "Event bus non-existing does not exist.")
+
+    client.put_permission.when.called_with(
+        EventBusName="test-bus",
+        Action="events:PutPermission",
+        Principal="111111111111",
+        StatementId="test",
+    ).should.throw(
+        ClientError, "Provided value in parameter 'action' is not supported."
+    )
+
+
+@mock_events
+def test_remove_permission_errors():
+    client = boto3.client("events", "us-east-1")
+    client.create_event_bus(Name="test-bus")
+
+    client.remove_permission.when.called_with(
+        EventBusName="non-existing", StatementId="test"
+    ).should.throw(ClientError, "Event bus non-existing does not exist.")
+
+    client.remove_permission.when.called_with(
+        EventBusName="test-bus", StatementId="test"
+    ).should.throw(ClientError, "EventBus does not have a policy.")
+
+    client.put_permission(
+        EventBusName="test-bus",
+        Action="events:PutEvents",
+        Principal="111111111111",
+        StatementId="test",
+    )
+
+    client.remove_permission.when.called_with(
+        EventBusName="test-bus", StatementId="non-existing"
+    ).should.throw(ClientError, "Statement with the provided id does not exist.")
+
+
 @mock_events
 def test_put_events():
     client = boto3.client("events", "eu-central-1")
@@ -220,3 +268,177 @@ def test_put_events():

     with assert_raises(ClientError):
         client.put_events(Entries=[event] * 20)
@mock_events
def test_create_event_bus():
client = boto3.client("events", "us-east-1")
response = client.create_event_bus(Name="test-bus")
response["EventBusArn"].should.equal(
"arn:aws:events:us-east-1:123456789012:event-bus/test-bus"
)
@mock_events
def test_create_event_bus_errors():
client = boto3.client("events", "us-east-1")
client.create_event_bus(Name="test-bus")
client.create_event_bus.when.called_with(Name="test-bus").should.throw(
ClientError, "Event bus test-bus already exists."
)
# the 'default' name is already used for the account's default event bus.
client.create_event_bus.when.called_with(Name="default").should.throw(
ClientError, "Event bus default already exists."
)
# non partner event buses can't contain the '/' character
client.create_event_bus.when.called_with(Name="test/test-bus").should.throw(
ClientError, "Event bus name must not contain '/'."
)
client.create_event_bus.when.called_with(
Name="aws.partner/test/test-bus", EventSourceName="aws.partner/test/test-bus"
).should.throw(
ClientError, "Event source aws.partner/test/test-bus does not exist."
)
@mock_events
def test_describe_event_bus():
client = boto3.client("events", "us-east-1")
response = client.describe_event_bus()
response["Name"].should.equal("default")
response["Arn"].should.equal(
"arn:aws:events:us-east-1:123456789012:event-bus/default"
)
response.should_not.have.key("Policy")
client.create_event_bus(Name="test-bus")
client.put_permission(
EventBusName="test-bus",
Action="events:PutEvents",
Principal="111111111111",
StatementId="test",
)
response = client.describe_event_bus(Name="test-bus")
response["Name"].should.equal("test-bus")
response["Arn"].should.equal(
"arn:aws:events:us-east-1:123456789012:event-bus/test-bus"
)
json.loads(response["Policy"]).should.equal(
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "test",
"Effect": "Allow",
"Principal": {"AWS": "arn:aws:iam::111111111111:root"},
"Action": "events:PutEvents",
"Resource": "arn:aws:events:us-east-1:123456789012:event-bus/test-bus",
}
],
}
)
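The policy returned by describe_event_bus above has a regular shape; a hypothetical builder that produces the same document from put_permission's inputs:

```python
def put_events_policy(sid, principal_account, bus_arn):
    # Mirrors the JSON asserted in test_describe_event_bus: a bare account id
    # principal is expanded to that account's root ARN.
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": sid,
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::{}:root".format(principal_account)},
                "Action": "events:PutEvents",
                "Resource": bus_arn,
            }
        ],
    }
```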
@mock_events
def test_describe_event_bus_errors():
client = boto3.client("events", "us-east-1")
client.describe_event_bus.when.called_with(Name="non-existing").should.throw(
ClientError, "Event bus non-existing does not exist."
)
@mock_events
def test_list_event_buses():
client = boto3.client("events", "us-east-1")
client.create_event_bus(Name="test-bus-1")
client.create_event_bus(Name="test-bus-2")
client.create_event_bus(Name="other-bus-1")
client.create_event_bus(Name="other-bus-2")
response = client.list_event_buses()
response["EventBuses"].should.have.length_of(5)
sorted(response["EventBuses"], key=lambda i: i["Name"]).should.equal(
[
{
"Name": "default",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/default",
},
{
"Name": "other-bus-1",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/other-bus-1",
},
{
"Name": "other-bus-2",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/other-bus-2",
},
{
"Name": "test-bus-1",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/test-bus-1",
},
{
"Name": "test-bus-2",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/test-bus-2",
},
]
)
response = client.list_event_buses(NamePrefix="other-bus")
response["EventBuses"].should.have.length_of(2)
sorted(response["EventBuses"], key=lambda i: i["Name"]).should.equal(
[
{
"Name": "other-bus-1",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/other-bus-1",
},
{
"Name": "other-bus-2",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/other-bus-2",
},
]
)
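The NamePrefix behaviour asserted above amounts to a simple startswith filter; a minimal sketch (names and sorting are assumptions for illustration, not moto's implementation):

```python
def filter_event_buses(buses, name_prefix=None):
    # With no prefix, every bus is returned; otherwise only those whose
    # Name starts with the prefix. Sorted by Name for stable comparison.
    if not name_prefix:
        return sorted(buses, key=lambda b: b["Name"])
    return sorted(
        (b for b in buses if b["Name"].startswith(name_prefix)),
        key=lambda b: b["Name"],
    )
```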
@mock_events
def test_delete_event_bus():
client = boto3.client("events", "us-east-1")
client.create_event_bus(Name="test-bus")
response = client.list_event_buses()
response["EventBuses"].should.have.length_of(2)
client.delete_event_bus(Name="test-bus")
response = client.list_event_buses()
response["EventBuses"].should.have.length_of(1)
response["EventBuses"].should.equal(
[
{
"Name": "default",
"Arn": "arn:aws:events:us-east-1:123456789012:event-bus/default",
}
]
)
# deleting a non-existing event bus should be successful
client.delete_event_bus(Name="non-existing")
@mock_events
def test_delete_event_bus_errors():
client = boto3.client("events", "us-east-1")
client.delete_event_bus.when.called_with(Name="default").should.throw(
ClientError, "Cannot delete event bus default."
)

View File

@@ -18,6 +18,7 @@ from nose.tools import raises
from datetime import datetime
from tests.helpers import requires_boto_gte
from uuid import uuid4
MOCK_CERT = """-----BEGIN CERTIFICATE-----
@@ -169,6 +170,14 @@ def test_create_role_and_instance_profile():
profile.path.should.equal("/")
@mock_iam
def test_create_instance_profile_should_throw_when_name_is_not_unique():
conn = boto3.client("iam", region_name="us-east-1")
conn.create_instance_profile(InstanceProfileName="unique-instance-profile")
with assert_raises(ClientError):
conn.create_instance_profile(InstanceProfileName="unique-instance-profile")
@mock_iam_deprecated()
def test_remove_role_from_instance_profile():
conn = boto.connect_iam()
@@ -400,6 +409,21 @@ def test_create_policy():
)
@mock_iam
def test_create_policy_already_exists():
conn = boto3.client("iam", region_name="us-east-1")
response = conn.create_policy(
PolicyName="TestCreatePolicy", PolicyDocument=MOCK_POLICY
)
with assert_raises(conn.exceptions.EntityAlreadyExistsException) as ex:
response = conn.create_policy(
PolicyName="TestCreatePolicy", PolicyDocument=MOCK_POLICY
)
ex.exception.response["Error"]["Code"].should.equal("EntityAlreadyExists")
ex.exception.response["ResponseMetadata"]["HTTPStatusCode"].should.equal(409)
ex.exception.response["Error"]["Message"].should.contain("TestCreatePolicy")
@mock_iam
def test_delete_policy():
conn = boto3.client("iam", region_name="us-east-1")
@@ -1292,6 +1316,122 @@ def test_get_access_key_last_used():
resp["UserName"].should.equal(create_key_response["UserName"])
@mock_iam
def test_upload_ssh_public_key():
iam = boto3.resource("iam", region_name="us-east-1")
client = iam.meta.client
username = "test-user"
iam.create_user(UserName=username)
public_key = MOCK_CERT
resp = client.upload_ssh_public_key(UserName=username, SSHPublicKeyBody=public_key)
pubkey = resp["SSHPublicKey"]
pubkey["SSHPublicKeyBody"].should.equal(public_key)
pubkey["UserName"].should.equal(username)
pubkey["SSHPublicKeyId"].should.have.length_of(20)
assert pubkey["SSHPublicKeyId"].startswith("APKA")
pubkey.should.have.key("Fingerprint")
pubkey["Status"].should.equal("Active")
(
datetime.utcnow() - pubkey["UploadDate"].replace(tzinfo=None)
).seconds.should.be.within(0, 10)
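The assertions above pin down the key id's format: 20 characters, "APKA" prefix. A hypothetical generator consistent with that contract (the alphabet is an assumption):

```python
import random
import string


def make_ssh_public_key_id(rng=random):
    # 4-character "APKA" prefix plus a 16-character uppercase/digit suffix
    # gives the 20-character id the tests expect.
    alphabet = string.ascii_uppercase + string.digits
    suffix = "".join(rng.choice(alphabet) for _ in range(16))
    return "APKA" + suffix
```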
@mock_iam
def test_get_ssh_public_key():
iam = boto3.resource("iam", region_name="us-east-1")
client = iam.meta.client
username = "test-user"
iam.create_user(UserName=username)
public_key = MOCK_CERT
with assert_raises(ClientError):
client.get_ssh_public_key(
UserName=username, SSHPublicKeyId="xxnon-existent-keyxx", Encoding="SSH"
)
resp = client.upload_ssh_public_key(UserName=username, SSHPublicKeyBody=public_key)
ssh_public_key_id = resp["SSHPublicKey"]["SSHPublicKeyId"]
resp = client.get_ssh_public_key(
UserName=username, SSHPublicKeyId=ssh_public_key_id, Encoding="SSH"
)
resp["SSHPublicKey"]["SSHPublicKeyBody"].should.equal(public_key)
@mock_iam
def test_list_ssh_public_keys():
iam = boto3.resource("iam", region_name="us-east-1")
client = iam.meta.client
username = "test-user"
iam.create_user(UserName=username)
public_key = MOCK_CERT
resp = client.list_ssh_public_keys(UserName=username)
resp["SSHPublicKeys"].should.have.length_of(0)
resp = client.upload_ssh_public_key(UserName=username, SSHPublicKeyBody=public_key)
ssh_public_key_id = resp["SSHPublicKey"]["SSHPublicKeyId"]
resp = client.list_ssh_public_keys(UserName=username)
resp["SSHPublicKeys"].should.have.length_of(1)
resp["SSHPublicKeys"][0]["SSHPublicKeyId"].should.equal(ssh_public_key_id)
@mock_iam
def test_update_ssh_public_key():
iam = boto3.resource("iam", region_name="us-east-1")
client = iam.meta.client
username = "test-user"
iam.create_user(UserName=username)
public_key = MOCK_CERT
with assert_raises(ClientError):
client.update_ssh_public_key(
UserName=username, SSHPublicKeyId="xxnon-existent-keyxx", Status="Inactive"
)
resp = client.upload_ssh_public_key(UserName=username, SSHPublicKeyBody=public_key)
ssh_public_key_id = resp["SSHPublicKey"]["SSHPublicKeyId"]
resp["SSHPublicKey"]["Status"].should.equal("Active")
resp = client.update_ssh_public_key(
UserName=username, SSHPublicKeyId=ssh_public_key_id, Status="Inactive"
)
resp = client.get_ssh_public_key(
UserName=username, SSHPublicKeyId=ssh_public_key_id, Encoding="SSH"
)
resp["SSHPublicKey"]["Status"].should.equal("Inactive")
@mock_iam
def test_delete_ssh_public_key():
iam = boto3.resource("iam", region_name="us-east-1")
client = iam.meta.client
username = "test-user"
iam.create_user(UserName=username)
public_key = MOCK_CERT
with assert_raises(ClientError):
client.delete_ssh_public_key(
UserName=username, SSHPublicKeyId="xxnon-existent-keyxx"
)
resp = client.upload_ssh_public_key(UserName=username, SSHPublicKeyBody=public_key)
ssh_public_key_id = resp["SSHPublicKey"]["SSHPublicKeyId"]
resp = client.list_ssh_public_keys(UserName=username)
resp["SSHPublicKeys"].should.have.length_of(1)
resp = client.delete_ssh_public_key(
UserName=username, SSHPublicKeyId=ssh_public_key_id
)
resp = client.list_ssh_public_keys(UserName=username)
resp["SSHPublicKeys"].should.have.length_of(0)
@mock_iam
def test_get_account_authorization_details():
test_policy = json.dumps(
@@ -2027,6 +2167,42 @@ def test_create_role_with_permissions_boundary():
conn.list_roles().get("Roles")[0].get("PermissionsBoundary").should.equal(expected)
@mock_iam
def test_create_role_with_same_name_should_fail():
iam = boto3.client("iam", region_name="us-east-1")
test_role_name = str(uuid4())
iam.create_role(
RoleName=test_role_name, AssumeRolePolicyDocument="policy", Description="test"
)
# Create the role again, and verify that it fails
with assert_raises(ClientError) as err:
iam.create_role(
RoleName=test_role_name,
AssumeRolePolicyDocument="policy",
Description="test",
)
err.exception.response["Error"]["Code"].should.equal("EntityAlreadyExists")
err.exception.response["Error"]["Message"].should.equal(
"Role with name {0} already exists.".format(test_role_name)
)
@mock_iam
def test_create_policy_with_same_name_should_fail():
iam = boto3.client("iam", region_name="us-east-1")
test_policy_name = str(uuid4())
policy = iam.create_policy(PolicyName=test_policy_name, PolicyDocument=MOCK_POLICY)
# Create the policy again, and verify that it fails
with assert_raises(ClientError) as err:
iam.create_policy(PolicyName=test_policy_name, PolicyDocument=MOCK_POLICY)
err.exception.response["Error"]["Code"].should.equal("EntityAlreadyExists")
err.exception.response["Error"]["Message"].should.equal(
"A policy called {0} already exists. Duplicate names are not allowed.".format(
test_policy_name
)
)
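The duplicate-name checks above boil down to a name-keyed registry; a minimal sketch (class and method names are illustrative, not moto's backend):

```python
class PolicyStore:
    """Toy registry enforcing the unique-name rule the tests assert."""

    def __init__(self):
        self._policies = {}

    def create_policy(self, name, document):
        if name in self._policies:
            # Same message shape as the EntityAlreadyExists error above.
            raise ValueError(
                "A policy called {} already exists. "
                "Duplicate names are not allowed.".format(name)
            )
        self._policies[name] = document
```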
@mock_iam
def test_create_open_id_connect_provider():
client = boto3.client("iam", region_name="us-east-1")
@@ -2302,3 +2478,123 @@ def test_delete_account_password_policy_errors():
client.delete_account_password_policy.when.called_with().should.throw(
ClientError, "The account policy with name PasswordPolicy cannot be found."
)
@mock_iam
def test_get_account_summary():
client = boto3.client("iam", region_name="us-east-1")
iam = boto3.resource("iam", region_name="us-east-1")
account_summary = iam.AccountSummary()
account_summary.summary_map.should.equal(
{
"GroupPolicySizeQuota": 5120,
"InstanceProfilesQuota": 1000,
"Policies": 0,
"GroupsPerUserQuota": 10,
"InstanceProfiles": 0,
"AttachedPoliciesPerUserQuota": 10,
"Users": 0,
"PoliciesQuota": 1500,
"Providers": 0,
"AccountMFAEnabled": 0,
"AccessKeysPerUserQuota": 2,
"AssumeRolePolicySizeQuota": 2048,
"PolicyVersionsInUseQuota": 10000,
"GlobalEndpointTokenVersion": 1,
"VersionsPerPolicyQuota": 5,
"AttachedPoliciesPerGroupQuota": 10,
"PolicySizeQuota": 6144,
"Groups": 0,
"AccountSigningCertificatesPresent": 0,
"UsersQuota": 5000,
"ServerCertificatesQuota": 20,
"MFADevices": 0,
"UserPolicySizeQuota": 2048,
"PolicyVersionsInUse": 0,
"ServerCertificates": 0,
"Roles": 0,
"RolesQuota": 1000,
"SigningCertificatesPerUserQuota": 2,
"MFADevicesInUse": 0,
"RolePolicySizeQuota": 10240,
"AttachedPoliciesPerRoleQuota": 10,
"AccountAccessKeysPresent": 0,
"GroupsQuota": 300,
}
)
client.create_instance_profile(InstanceProfileName="test-profile")
client.create_open_id_connect_provider(
Url="https://example.com", ThumbprintList=[],
)
response_policy = client.create_policy(
PolicyName="test-policy", PolicyDocument=MOCK_POLICY
)
client.create_role(RoleName="test-role", AssumeRolePolicyDocument="test policy")
client.attach_role_policy(
RoleName="test-role", PolicyArn=response_policy["Policy"]["Arn"]
)
client.create_saml_provider(
Name="TestSAMLProvider", SAMLMetadataDocument="a" * 1024
)
client.create_group(GroupName="test-group")
client.attach_group_policy(
GroupName="test-group", PolicyArn=response_policy["Policy"]["Arn"]
)
client.create_user(UserName="test-user")
client.attach_user_policy(
UserName="test-user", PolicyArn=response_policy["Policy"]["Arn"]
)
client.enable_mfa_device(
UserName="test-user",
SerialNumber="123456789",
AuthenticationCode1="234567",
AuthenticationCode2="987654",
)
client.create_virtual_mfa_device(VirtualMFADeviceName="test-device")
client.upload_server_certificate(
ServerCertificateName="test-cert",
CertificateBody="cert-body",
PrivateKey="private-key",
)
account_summary.load()
account_summary.summary_map.should.equal(
{
"GroupPolicySizeQuota": 5120,
"InstanceProfilesQuota": 1000,
"Policies": 1,
"GroupsPerUserQuota": 10,
"InstanceProfiles": 1,
"AttachedPoliciesPerUserQuota": 10,
"Users": 1,
"PoliciesQuota": 1500,
"Providers": 2,
"AccountMFAEnabled": 0,
"AccessKeysPerUserQuota": 2,
"AssumeRolePolicySizeQuota": 2048,
"PolicyVersionsInUseQuota": 10000,
"GlobalEndpointTokenVersion": 1,
"VersionsPerPolicyQuota": 5,
"AttachedPoliciesPerGroupQuota": 10,
"PolicySizeQuota": 6144,
"Groups": 1,
"AccountSigningCertificatesPresent": 0,
"UsersQuota": 5000,
"ServerCertificatesQuota": 20,
"MFADevices": 1,
"UserPolicySizeQuota": 2048,
"PolicyVersionsInUse": 3,
"ServerCertificates": 1,
"Roles": 1,
"RolesQuota": 1000,
"SigningCertificatesPerUserQuota": 2,
"MFADevicesInUse": 1,
"RolePolicySizeQuota": 10240,
"AttachedPoliciesPerRoleQuota": 10,
"AccountAccessKeysPresent": 0,
"GroupsQuota": 300,
}
)

View File

@@ -8,6 +8,7 @@ import sure # noqa
from nose.tools import assert_raises
from boto.exception import BotoServerError
from botocore.exceptions import ClientError
from moto import mock_iam, mock_iam_deprecated
MOCK_POLICY = """
@@ -182,3 +183,25 @@ def test_list_group_policies():
conn.list_group_policies(GroupName="my-group")["PolicyNames"].should.equal(
["my-policy"]
)
@mock_iam
def test_delete_group():
conn = boto3.client("iam", region_name="us-east-1")
conn.create_group(GroupName="my-group")
groups = conn.list_groups()
assert groups["Groups"][0]["GroupName"] == "my-group"
assert len(groups["Groups"]) == 1
conn.delete_group(GroupName="my-group")
conn.list_groups()["Groups"].should.be.empty
@mock_iam
def test_delete_unknown_group():
conn = boto3.client("iam", region_name="us-east-1")
with assert_raises(ClientError) as err:
conn.delete_group(GroupName="unknown-group")
err.exception.response["Error"]["Code"].should.equal("NoSuchEntity")
err.exception.response["Error"]["Message"].should.equal(
"The group with name unknown-group cannot be found."
)

View File

@@ -159,6 +159,17 @@ def test_create_account():
create_status["AccountName"].should.equal(mockname)
@mock_organizations
def test_describe_create_account_status():
client = boto3.client("organizations", region_name="us-east-1")
client.create_organization(FeatureSet="ALL")["Organization"]
request_id = client.create_account(AccountName=mockname, Email=mockemail)[
"CreateAccountStatus"
]["Id"]
response = client.describe_create_account_status(CreateAccountRequestId=request_id)
validate_create_account_status(response["CreateAccountStatus"])
@mock_organizations
def test_describe_account():
client = boto3.client("organizations", region_name="us-east-1")

View File

@@ -45,7 +45,7 @@ def test_get_secret_that_does_not_exist():
result = conn.get_secret_value(SecretId="i-dont-exist")
assert_equal(
"Secrets Manager can't find the specified secret.",
cm.exception.response["Error"]["Message"],
)
@@ -61,7 +61,7 @@ def test_get_secret_that_does_not_match():
result = conn.get_secret_value(SecretId="i-dont-match")
assert_equal(
"Secrets Manager can't find the specified secret.",
cm.exception.response["Error"]["Message"],
)
@@ -88,7 +88,7 @@ def test_get_secret_that_has_no_value():
result = conn.get_secret_value(SecretId="java-util-test-password")
assert_equal(
"Secrets Manager can't find the specified secret value for staging label: AWSCURRENT",
cm.exception.response["Error"]["Message"],
)

View File

@@ -48,9 +48,7 @@ def test_get_secret_that_does_not_exist():
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
json_data = json.loads(get_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@@ -70,9 +68,7 @@ def test_get_secret_that_does_not_match():
headers={"X-Amz-Target": "secretsmanager.GetSecretValue"},
)
json_data = json.loads(get_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@@ -95,7 +91,7 @@ def test_get_secret_that_has_no_value():
json_data = json.loads(get_secret.data.decode("utf-8"))
assert (
json_data["message"]
== "Secrets Manager can't find the specified secret value for staging label: AWSCURRENT"
)
assert json_data["__type"] == "ResourceNotFoundException"
@@ -178,9 +174,7 @@ def test_describe_secret_that_does_not_exist():
)
json_data = json.loads(describe_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@@ -202,9 +196,7 @@ def test_describe_secret_that_does_not_match():
)
json_data = json.loads(describe_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@@ -306,9 +298,7 @@ def test_rotate_secret_that_does_not_exist():
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"
@@ -330,9 +320,7 @@ def test_rotate_secret_that_does_not_match():
)
json_data = json.loads(rotate_secret.data.decode("utf-8"))
assert json_data["message"] == "Secrets Manager can't find the specified secret."
assert json_data["__type"] == "ResourceNotFoundException"

View File

@@ -173,6 +173,27 @@ def test_publish_to_sqs_msg_attr_byte_value():
)
@mock_sqs
@mock_sns
def test_publish_to_sqs_msg_attr_number_type():
sns = boto3.resource("sns", region_name="us-east-1")
topic = sns.create_topic(Name="test-topic")
sqs = boto3.resource("sqs", region_name="us-east-1")
queue = sqs.create_queue(QueueName="test-queue")
topic.subscribe(Protocol="sqs", Endpoint=queue.attributes["QueueArn"])
topic.publish(
Message="test message",
MessageAttributes={"retries": {"DataType": "Number", "StringValue": "0"}},
)
message = json.loads(queue.receive_messages()[0].body)
message["Message"].should.equal("test message")
message["MessageAttributes"].should.equal(
{"retries": {"Type": "Number", "Value": 0}}
)
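The test above pins the transformation a Number attribute undergoes on its way into the SQS message body: the StringValue arrives as a JSON number under a {"Type", "Value"} shape. A hedged sketch of that mapping (function name and the non-Number passthrough are assumptions):

```python
def sns_attrs_to_sqs(attrs):
    # Convert SNS publish MessageAttributes ({"DataType", "StringValue"})
    # into the {"Type", "Value"} shape delivered to an SQS subscriber.
    out = {}
    for name, attr in attrs.items():
        value = attr["StringValue"]
        if attr["DataType"] == "Number":
            # "0" -> 0, "1.5" -> 1.5, matching the JSON-number assertion above.
            value = float(value) if "." in value else int(value)
        out[name] = {"Type": attr["DataType"], "Value": value}
    return out
```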
@mock_sns
def test_publish_sms():
client = boto3.client("sns", region_name="us-east-1")