Importing files during tests fails with patched open builtin · Issue #2180 · pytest-dev/pytest

Hello,

When using pytest to run a Python unittest test case, pytest fails on tests that patch builtins.open.

Basically, the use case is that I am trying to test a function which calls Consul to read a configuration value and then validates a JSON file. The failure occurs when the configuration value is read through python-consul, which in turn makes the HTTP call through requests.
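
Roughly, the code under test has this shape (a simplified sketch; the module, key, and validation logic are placeholders, assuming python-consul's kv.get returns an (index, data) pair with the stored bytes under data['Value']):

import json

import consul  # python-consul; issues its HTTP calls through requests


def validate_config(json_path, consul_key):
    # Fetch the expected value from Consul. kv.get returns (index, data),
    # where data['Value'] holds the raw bytes stored under the key.
    index, data = consul.Consul().kv.get(consul_key)
    expected = data['Value'].decode('utf-8') if data else None

    # Validate the JSON file on disk against the value read from Consul.
    with open(json_path) as fh:
        config = json.load(fh)
    return config.get('environment') == expected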

Python Version: 3.5.2
Package Versions:

> pip list
pip (8.1.1)
py (1.4.32)
pytest (3.0.5)
requests (2.12.4)
setuptools (20.10.1)

I'm running macOS Sierra, but I've also been able to reproduce this inside a Debian Docker image:

uname -a
Linux a77d12c4f722 3.19.0-15-generic #15-Ubuntu SMP Thu Apr 16 23:32:37 UTC 2015 x86_64 GNU/Linux

I've written a small script which can reproduce the issue:

import unittest
import requests
from unittest.mock import patch, mock_open


class PyTestIssue(unittest.TestCase):
    def test_open(self):
        with patch('builtins.open', mock_open(read_data='')):
            requests.get('http://www.google.com')

The full stacktrace:

============================= test session starts ==============================
platform darwin -- Python 3.5.2, pytest-3.0.5, py-1.4.32, pluggy-0.4.0
rootdir: /Users/rakan/pytest-test, inifile:
collected 1 items

test_file_read_write.py F

=================================== FAILURES ===================================
____________________________ PyTestIssue.test_open _____________________________

name = 'netrc', path = None, target = None

>   ???
E   AttributeError: 'AssertionRewritingHook' object has no attribute 'find_spec'

<frozen importlib._bootstrap>:890: AttributeError

During handling of the above exception, another exception occurred:

self = <test_file_read_write.PyTestIssue testMethod=test_open>

    def test_open(self):
        with patch('builtins.open', mock_open(read_data='')):
>           requests.get('http://www.google.com')

test_file_read_write.py:9:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.pyenv/versions/3.5.2/envs/pyenv-issue/lib/python3.5/site-packages/requests/api.py:70: in get
    return request('get', url, params=params, **kwargs)
../.pyenv/versions/3.5.2/envs/pyenv-issue/lib/python3.5/site-packages/requests/api.py:56: in request
    return session.request(method=method, url=url, **kwargs)
../.pyenv/versions/3.5.2/envs/pyenv-issue/lib/python3.5/site-packages/requests/sessions.py:474: in request
    prep = self.prepare_request(req)
../.pyenv/versions/3.5.2/envs/pyenv-issue/lib/python3.5/site-packages/requests/sessions.py:394: in prepare_request
    auth = get_netrc_auth(request.url)
../.pyenv/versions/3.5.2/envs/pyenv-issue/lib/python3.5/site-packages/requests/utils.py:113: in get_netrc_auth
    from netrc import netrc, NetrcParseError
<frozen importlib._bootstrap>:969: in _find_and_load
    ???
<frozen importlib._bootstrap>:954: in _find_and_load_unlocked
    ???
<frozen importlib._bootstrap>:892: in _find_spec
    ???
<frozen importlib._bootstrap>:873: in _find_spec_legacy
    ???
../.pyenv/versions/3.5.2/envs/pyenv-issue/lib/python3.5/site-packages/_pytest/assertion/rewrite.py:75: in find_module
    fd, fn, desc = imp.find_module(lastname, path)
../.pyenv/versions/3.5.2/lib/python3.5/imp.py:301: in find_module
    encoding = tokenize.detect_encoding(file.readline)[0]
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

readline = <MagicMock name='open().readline' id='4541986128'>

    def detect_encoding(readline):
        """
        The detect_encoding() function is used to detect the encoding that should
        be used to decode a Python source file.  It requires one argument, readline,
        in the same way as the tokenize() generator.

        It will call readline a maximum of twice, and return the encoding used
        (as a string) and a list of any lines (left as bytes) it has read in.

        It detects the encoding from the presence of a utf-8 bom or an encoding
        cookie as specified in pep-0263.  If both a bom and a cookie are present,
        but disagree, a SyntaxError will be raised.  If the encoding cookie is an
        invalid charset, raise a SyntaxError.  Note that if a utf-8 bom is found,
        'utf-8-sig' is returned.

        If no encoding is specified, then the default of 'utf-8' will be returned.
        """
        try:
            filename = readline.__self__.name
        except AttributeError:
            filename = None
        bom_found = False
        encoding = None
        default = 'utf-8'
        def read_or_stop():
            try:
                return readline()
            except StopIteration:
                return b''

        def find_cookie(line):
            try:
                # Decode as UTF-8. Either the line is an encoding declaration,
                # in which case it should be pure ASCII, or it must be UTF-8
                # per default encoding.
                line_string = line.decode('utf-8')
            except UnicodeDecodeError:
                msg = "invalid or missing encoding declaration"
                if filename is not None:
                    msg = '{} for {!r}'.format(msg, filename)
                raise SyntaxError(msg)

            match = cookie_re.match(line_string)
            if not match:
                return None
            encoding = _get_normal_name(match.group(1))
            try:
                codec = lookup(encoding)
            except LookupError:
                # This behaviour mimics the Python interpreter
                if filename is None:
                    msg = "unknown encoding: " + encoding
                else:
                    msg = "unknown encoding for {!r}: {}".format(filename,
                            encoding)
                raise SyntaxError(msg)

            if bom_found:
                if encoding != 'utf-8':
                    # This behaviour mimics the Python interpreter
                    if filename is None:
                        msg = 'encoding problem: utf-8'
                    else:
                        msg = 'encoding problem for {!r}: utf-8'.format(filename)
                    raise SyntaxError(msg)
                encoding += '-sig'
            return encoding

        first = read_or_stop()
>       if first.startswith(BOM_UTF8):
E       TypeError: startswith first arg must be str or a tuple of str, not bytes

../.pyenv/versions/3.5.2/lib/python3.5/tokenize.py:426: TypeError
=========================== 1 failed in 0.23 seconds ===========================
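
One workaround that should sidestep the failing path is to import netrc before patching: the lazy "from netrc import ..." inside requests then resolves from sys.modules instead of going through pytest's rewrite hook while open is mocked. A minimal sketch of the adjusted test:

import unittest

import netrc  # imported early so requests' lazy netrc import is already satisfied
import requests
from unittest.mock import patch, mock_open


class PyTestIssueWorkaround(unittest.TestCase):
    def test_open(self):
        # builtins.open is still patched here, but netrc is already in
        # sys.modules, so no import is resolved through the mocked open.
        with patch('builtins.open', mock_open(read_data='')):
            requests.get('http://www.google.com')

More generally, patching open only in the namespace of the module under test (rather than builtins.open globally) avoids interfering with the import machinery at all.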
