That function's signature looks like this:
```
def create_internet_gateway(self, dry_run=False):
```
So when we pass in the vpc_id object in the test, it gets bound to
`dry_run`, and the check later in the function then treats `dry_run`
as `True` since the vpc_id object exists. This later throws a
JSONResponseError because the `DryRun` flag is set. This error-raising
functionality was added in the most recent version of moto, which
exposed this bug.
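For context, here is a minimal sketch of the pitfall; the class and IDs
are illustrative stand-ins, not Salt's or boto's actual code:
```
class FakeVPCConnection(object):
    def create_internet_gateway(self, dry_run=False):
        # boto raises a JSONResponseError when DryRun is set; we
        # approximate that behavior here for illustration.
        if dry_run:
            raise RuntimeError(
                'JSONResponseError: Request would have succeeded, '
                'but DryRun flag is set.')
        return 'igw-0123456789'

conn = FakeVPCConnection()
vpc_id = 'vpc-abc123'

# The bug: vpc_id passed positionally is bound to dry_run, which is
# truthy, so the call raises instead of creating the gateway.
# conn.create_internet_gateway(vpc_id)

# The fix: don't pass vpc_id to a method whose signature doesn't accept it.
igw = conn.create_internet_gateway()
```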
This fixes the three boto_vpc_test unit state tests. We'll see if
other tests need to be addressed in other files on a full test run.
These tests now not only test the new functionality added for matching
on source URI and source_hash_name, but also test non-specific hash_type
lookups, specific hash_type lookups, and failed specific hash_type
lookups (i.e. requesting a hash type not present in the file).
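To make the three lookup cases concrete, here is a rough sketch of the
matching idea, assuming a checksum-file format of `<hexdigest> <filename>`.
This is an illustration only, not Salt's actual extract_hash implementation:
```
HASH_LENGTHS = {'md5': 32, 'sha1': 40, 'sha256': 64, 'sha512': 128}

def find_hash(hash_file_contents, filename, hash_type=None):
    '''Return (hash_type, hexdigest) for filename, or None if not found.

    hash_type=None is a non-specific lookup (any supported type matches);
    passing e.g. hash_type='sha256' requires that exact type to be present.
    '''
    hexdigits = set('0123456789abcdefABCDEF')
    for line in hash_file_contents.splitlines():
        parts = line.split()
        if len(parts) != 2 or parts[1].lstrip('*') != filename:
            continue
        digest = parts[0]
        for htype, hlen in HASH_LENGTHS.items():
            if hash_type is not None and hash_type != htype:
                continue
            if len(digest) == hlen and set(digest) <= hexdigits:
                return htype, digest.lower()
    return None

contents = 'ead48423703509d37c4a90e6a0d53e143b6fc268  foo.tar.gz'
print(find_hash(contents, 'foo.tar.gz'))                      # non-specific -> sha1
print(find_hash(contents, 'foo.tar.gz', hash_type='sha1'))    # specific -> sha1
print(find_hash(contents, 'foo.tar.gz', hash_type='sha256'))  # failed specific -> None
```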
Here is the stack trace produced when running file.line with
mode=replace on a file that exists but is empty, as described in
the bug report:
unit.modules.file_test.FileModuleTestCase.test_replace_line_in_empty_file .................................................
Traceback (most recent call last):
  File "/root/SaltStack/salt/tests/unit/modules/file_test.py", line 593, in test_replace_line_in_empty_file
    mode='replace'))
  File "/root/SaltStack/salt/salt/modules/file.py", line 1523, in line
    for line in body.split(os.linesep)])
TypeError: expected a character buffer object
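The class of fix this points at: with an empty file, the read-back body
ends up as a non-string (None), and in Python 2 handing None to string
machinery raises this kind of TypeError. A defensive sketch, using a
hypothetical split helper rather than Salt's actual file.py code:
```
import os

def split_body(content):
    # Normalize None/empty content before splitting, so an empty file
    # yields an empty list instead of a TypeError deep in the string
    # machinery.
    if not content:
        return []
    return content.split(os.linesep)

print(split_body(None))                        # []
print(split_body('one' + os.linesep + 'two'))  # ['one', 'two']
```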
* Fix typo in profile example ('private_key' listed twice)
* Reflect Joyent's current naming convention for VM sizes
* Update docs with modern images that are officially supported by Joyent
* Refresh example output of --list-sizes and --list-images
It would appear that if an attribute error is raised when trying to detect a class attr,
the test suite does not run the class teardown method but continues regardless. This
fixes the class attr error, which then allows the teardown to run. Prior to this, if the
teardown did not run, the entire suite would hang at shutdown because it was blocked
waiting for an ioloop to terminate.
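A sketch of the shape of the fix, with illustrative names (Suite and
io_loop are stand-ins, not the real test classes):
```
class Suite(object):
    pass

# Fragile: raises AttributeError if the attribute was never set, which
# can abort the runner before any teardown gets a chance to execute.
# io_loop = Suite.io_loop

# Robust: default to None so the detection itself cannot raise, and only
# perform cleanup when the attribute actually exists.
io_loop = getattr(Suite, 'io_loop', None)
if io_loop is not None:
    io_loop.stop()
```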
When running the tests with the TCP transport, we are not as forgiving
with the minion connection process as we are with ZMQ. With ZMQ, we
attempt to connect to the master; if it isn't up yet, we wait and try
again. With TCP, we try to connect to the master once, realize it's not
up (because the master process takes longer to spin up than the
minions), and bail out.
This just gives the master a little more time to come up by having the
minions try to connect a couple more times.
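Something like this retry loop captures the idea (a sketch, not Salt's
transport code; the host, port, and attempt counts are placeholders):
```
import socket
import time

def connect_with_retries(host, port, attempts=3, delay=1.0):
    # Instead of bailing out after a single failed TCP connect, give the
    # master a few chances to finish spinning up before giving up.
    last_exc = None
    for _ in range(attempts):
        try:
            return socket.create_connection((host, port), timeout=5)
        except socket.error as exc:
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Usage (placeholder master address/port):
# sock = connect_with_retries('127.0.0.1', 4506)
```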