google.cloud library: object has no attribute "schema_from_json"

I am trying to use the "schema_from_json" method of google.cloud.bigquery.client.Client, but Python reports that the attribute does not exist, even though it appears in the library documentation.


I have already updated the library, but the error stays the same.


My Python version is 3.7.


Source: https://googleapis.github.io/google-cloud-python/latest/bigquery/generated/google.cloud.bigquery.client.Client.html
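
One quick check (a minimal sketch, assuming only that the package is importable) is to print the release that is actually being imported and test for the method directly:

from google.cloud import bigquery

# Show which release this interpreter actually imports; an AttributeError
# for a documented method usually means an older install is being picked up.
print(bigquery.__version__)

# True only if the installed release includes schema_from_json.
print(hasattr(bigquery.Client, "schema_from_json"))

In my case the attribute is missing, as the dir() listing below also shows: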


from google.cloud import bigquery

dir(bigquery.client.Client)

['SCOPE',
 '_SET_PROJECT',
 '__class__',
 '__delattr__',
 '__dict__',
 '__dir__',
 '__doc__',
 '__eq__',
 '__format__',
 '__ge__',
 '__getattribute__',
 '__getstate__',
 '__gt__',
 '__hash__',
 '__init__',
 '__init_subclass__',
 '__le__',
 '__lt__',
 '__module__',
 '__ne__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 '_call_api',
 '_determine_default',
 '_do_multipart_upload',
 '_do_resumable_upload',
 '_get_query_results',
 '_http',
 '_initiate_resumable_upload',
 'cancel_job',
 'copy_table',
 'create_dataset',
 'create_table',
 'dataset',
 'delete_dataset',
 'delete_table',
 'extract_table',
 'from_service_account_json',
 'get_dataset',
 'get_job',
 'get_service_account_email',
 'get_table',
 'insert_rows',
 'insert_rows_json',
 'job_from_resource',
 'list_datasets',
 'list_jobs',
 'list_partitions',
 'list_projects',
 'list_rows',
 'list_tables',
 'load_table_from_dataframe',
 'load_table_from_file',
 'load_table_from_uri',
 'location',
 'query',
 'update_dataset',
 'update_table']
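
Note that 'schema_from_json' does not appear anywhere in this list. For reference, on releases where the method is missing, roughly the same result can be produced by parsing the schema file by hand. This is only a sketch of the idea, not the library's implementation: it assumes a schema file (here called "schemaname", matching the answer below) in the JSON format that bq show --schema emits, and it relies on SchemaField.from_api_repr, which older releases already provide:

import json

from google.cloud.bigquery.schema import SchemaField

# Read the bq-show style JSON schema and turn each entry into a
# SchemaField, which is the type LoadJobConfig.schema expects.
with open("schemaname") as f:
    schema = [SchemaField.from_api_repr(entry) for entry in json.load(f)]

print(schema)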


海绵宝宝撒
1 Answer

慕沐林林

I tested from Cloud Shell and it works. Here is the pip dependency in Cloud Shell:

google-cloud-bigquery          1.18.0

Here is my working code:

from google.cloud import bigquery

client = bigquery.Client()
dataset_id = 'us_dataset'
dataset_ref = client.dataset(dataset_id)
job_config = bigquery.LoadJobConfig()

# I use the from-file-path version
schema_dict = client.schema_from_json("schemaname")
print(schema_dict)
job_config.schema = schema_dict
job_config.write_disposition = bigquery.WriteDisposition.WRITE_TRUNCATE
job_config.create_disposition = bigquery.CreateDisposition.CREATE_IF_NEEDED
# The source format defaults to CSV, so the line below is optional.
job_config.source_format = bigquery.SourceFormat.CSV

uri = "gs://MY_BUCKET/name.csv"
load_job = client.load_table_from_uri(
    uri, dataset_ref.table("name"), job_config=job_config
)  # API request
print("Starting job {}".format(load_job.job_id))

load_job.result()  # Waits for table load to complete.
print("Job finished.")

destination_table = client.get_table(dataset_ref.table("name"))
print("Loaded {} rows.".format(destination_table.num_rows))

I generated the schema file with this command:

bq show --schema us_dataset.name > schemaname

The result is here:

[{"type":"STRING","name":"name","mode":"NULLABLE"},{"type":"STRING","name":"id","mode":"NULLABLE"}]
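
If a local environment still reports the missing attribute after this, upgrading the package should pick up a release that includes the method (1.18.0 above has it):

pip install --upgrade google-cloud-bigquery

After upgrading, restart the Python interpreter so that the new release is the one actually imported.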
