python-bigquery-pandas

tests.system.test_gbq.TestToGBQIntegration: test_upload_data_tokyo_non_existing_dataset failed

flaky-bot[bot] opened this issue 4 years ago • 2 comments

This test failed!

To configure my behavior, see the Flaky Bot documentation.

If I'm commenting on this issue too often, add the flakybot: quiet label and I will stop commenting.


commit: a63ea5d825fc103af8120b8d0b0a5724056d0740
buildURL: Build Status, Sponge
status: failed

Test output
self = 
project_id = 'precise-truck-742'
random_dataset_id = 'python_bigquery_pandas_tests_system_20220103140840_25ab46'
bigquery_client = 
def test_upload_data_tokyo_non_existing_dataset(
    self, project_id, random_dataset_id, bigquery_client
):
    from google.cloud import bigquery

    test_size = 10
    df = make_mixed_dataframe_v2(test_size)
    non_existing_tokyo_dataset = random_dataset_id
    non_existing_tokyo_destination = "{}.to_gbq_test".format(
        non_existing_tokyo_dataset
    )

    # Initialize table with sample data
    gbq.to_gbq(
        df,
        non_existing_tokyo_destination,
        project_id,
        credentials=self.credentials,
        location="asia-northeast1",
    )
    table = bigquery_client.get_table(
        bigquery.TableReference(
            bigquery.DatasetReference(project_id, non_existing_tokyo_dataset),
            "to_gbq_test",
        )
    )

tests/system/test_gbq.py:1456:


.nox/system-3-8/lib/python3.8/site-packages/google/cloud/bigquery/client.py:1034: in get_table
    api_response = self._call_api(
.nox/system-3-8/lib/python3.8/site-packages/google/cloud/bigquery/client.py:782: in _call_api
    return call()
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:283: in retry_wrapped_func
    return retry_target(
.nox/system-3-8/lib/python3.8/site-packages/google/api_core/retry.py:190: in retry_target
    return target()


self = <google.cloud.bigquery._http.Connection object at 0x7ffacfc9ee20>
method = 'GET'
path = '/projects/precise-truck-742/datasets/python_bigquery_pandas_tests_system_20220103140840_25ab46/tables/to_gbq_test'
query_params = None, data = None, content_type = None, headers = None
api_base_url = None, api_version = None, expect_json = True
_target_object = None, timeout = None

def api_request(
    self,
    method,
    path,
    query_params=None,
    data=None,
    content_type=None,
    headers=None,
    api_base_url=None,
    api_version=None,
    expect_json=True,
    _target_object=None,
    timeout=_DEFAULT_TIMEOUT,
):
    """Make a request over the HTTP transport to the API.

    You shouldn't need to use this method, but if you plan to
    interact with the API using these primitives, this is the
    correct one to use.

    :type method: str
    :param method: The HTTP method name (ie, ``GET``, ``POST``, etc).
                   Required.

    :type path: str
    :param path: The path to the resource (ie, ``'/b/bucket-name'``).
                 Required.

    :type query_params: dict or list
    :param query_params: A dictionary of keys and values (or list of
                         key-value pairs) to insert into the query
                         string of the URL.

    :type data: str
    :param data: The data to send as the body of the request. Default is
                 the empty string.

    :type content_type: str
    :param content_type: The proper MIME type of the data provided. Default
                         is None.

    :type headers: dict
    :param headers: extra HTTP headers to be sent with the request.

    :type api_base_url: str
    :param api_base_url: The base URL for the API endpoint.
                         Typically you won't have to provide this.
                         Default is the standard API base URL.

    :type api_version: str
    :param api_version: The version of the API to call.  Typically
                        you shouldn't provide this and instead use
                        the default for the library.  Default is the
                        latest API version supported by
                        google-cloud-python.

    :type expect_json: bool
    :param expect_json: If True, this method will try to parse the
                        response as JSON and raise an exception if
                        that cannot be done.  Default is True.

    :type _target_object: :class:`object`
    :param _target_object:
        (Optional) Protected argument to be used by library callers. This
        can allow custom behavior, for example, to defer an HTTP request
        and complete initialization of the object at a later time.

    :type timeout: float or tuple
    :param timeout: (optional) The amount of time, in seconds, to wait
        for the server response.

        Can also be passed as a tuple (connect_timeout, read_timeout).
        See :meth:`requests.Session.request` documentation for details.

    :raises ~google.cloud.exceptions.GoogleCloudError: if the response code
        is not 200 OK.
    :raises ValueError: if the response content type is not JSON.
    :rtype: dict or str
    :returns: The API response payload, either as a raw string or
              a dictionary if the response is valid JSON.
    """
    url = self.build_api_url(
        path=path,
        query_params=query_params,
        api_base_url=api_base_url,
        api_version=api_version,
    )

    # Making the executive decision that any dictionary
    # data will be sent properly as JSON.
    if data and isinstance(data, dict):
        data = json.dumps(data)
        content_type = "application/json"

    response = self._make_request(
        method=method,
        url=url,
        data=data,
        content_type=content_type,
        headers=headers,
        target_object=_target_object,
        timeout=timeout,
    )

    if not 200 <= response.status_code < 300:
        raise exceptions.from_http_response(response)

E google.api_core.exceptions.NotFound: 404 GET https://bigquery.googleapis.com/bigquery/v2/projects/precise-truck-742/datasets/python_bigquery_pandas_tests_system_20220103140840_25ab46/tables/to_gbq_test?prettyPrint=false: Not found: Dataset precise-truck-742:python_bigquery_pandas_tests_system_20220103140840_25ab46

.nox/system-3-8/lib/python3.8/site-packages/google/cloud/_http/__init__.py:480: NotFound

flaky-bot[bot] · Jan 03 '22 14:01

Looks like this issue is flaky. :worried:

I'm going to leave this open and stop commenting.

A human should fix and close this.


When run at the same commit (a63ea5d825fc103af8120b8d0b0a5724056d0740), this test passed in one build (Build Status, Sponge) and failed in another build (Build Status, Sponge).

flaky-bot[bot] · Jan 03 '22 14:01

Discussed offline. Datasets are strongly consistent within a region, but we aren't using regional endpoints to create the table, so we likely get an occasional stale read when checking whether the dataset exists.
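For illustration, one way to avoid that stale read is to pin the metadata call to the dataset's region. A minimal sketch, assuming the locational endpoint form https://LOCATION-bigquery.googleapis.com and a placeholder dataset name (not what the test does today):

from google.api_core.client_options import ClientOptions
from google.cloud import bigquery

# Sketch: send get_table to the asia-northeast1 locational endpoint instead
# of the global bigquery.googleapis.com endpoint, so the existence check is
# served by the same region that created the dataset. The endpoint URL and
# dataset name are assumptions for illustration.
options = ClientOptions(
    api_endpoint="https://asia-northeast1-bigquery.googleapis.com"
)
regional_client = bigquery.Client(
    project="precise-truck-742", client_options=options
)
table = regional_client.get_table(
    bigquery.TableReference(
        bigquery.DatasetReference("precise-truck-742", "some_tokyo_dataset"),
        "to_gbq_test",
    )
)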

This is likely best solved by moving away from the table create/delete behavior (https://github.com/googleapis/python-bigquery-pandas/issues/118, https://github.com/googleapis/python-bigquery-pandas/issues/339). We can use a server-side write disposition to handle the table creation step, and with a load job the request will be routed to the correct location. Schema modification is also supported during a load job via the schema update options in the job configuration; a rough sketch follows below.

tswast · Jan 05 '22 20:01
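For reference, a rough sketch of the load-job approach described above, written against the google-cloud-bigquery client directly. The dataset and table names are placeholders, and this is illustrative rather than the eventual pandas-gbq implementation:

import pandas as pd
from google.cloud import bigquery

client = bigquery.Client(project="precise-truck-742")
df = pd.DataFrame({"num": [1, 2, 3]})  # stand-in for the test dataframe

job_config = bigquery.LoadJobConfig(
    # Server-side create/write dispositions replace the client-side
    # get_table/create_table round trip.
    create_disposition=bigquery.CreateDisposition.CREATE_IF_NEEDED,
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    # Schema changes can happen during the load job itself.
    schema_update_options=[
        bigquery.SchemaUpdateOption.ALLOW_FIELD_ADDITION,
        bigquery.SchemaUpdateOption.ALLOW_FIELD_RELAXATION,
    ],
)

load_job = client.load_table_from_dataframe(
    df,
    "precise-truck-742.some_tokyo_dataset.to_gbq_test",  # placeholder names
    job_config=job_config,
    location="asia-northeast1",  # routes the job to the dataset's region
)
load_job.result()  # wait for the load to finish

Because the location travels with the job and CREATE_IF_NEEDED handles table creation on the server, there is no separate existence check left to return a stale answer.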