Calling the GPT-3 API from Python

GPT-3 is a language model released by OpenAI in 2020. It has received a great deal of media attention for its ability to write articles, songs, and poems, and even code! The tool is free to try and only requires signing up with an email address.

GPT-3 is a kind of machine learning model called a transformer. Specifically, it is a Generative Pre-trained Transformer, hence the name "GPT". The transformer architecture uses self-attention to model text: in general, it processes one word at a time and uses the preceding words to predict the next word in the sequence.

GPT-3 has a broad range of applications spanning science, the arts, and technology. It can answer basic questions about science and math, and it can even answer questions about graduate-level math and science concepts fairly accurately. Even more surprising, when I asked it questions related to my own PhD research in physical chemistry, it was able to give decent explanations. It has its limits, though: when I asked GPT-3 about more novel research methods in physical chemistry, it could not give clear answers. So GPT-3 should be used with caution as a search engine for education and research: it has no fact-checking mechanism. If fact-checking improves, I can imagine GPT-3 being very useful at the graduate level and even in research.

Beyond my personal experience, I have seen many other cool uses of the tool. For example, one developer used GPT-3 to orchestrate cloud services that carry out complex tasks. Other users have used GPT-3 to generate working Python and SQL scripts, as well as programs in other languages. In the arts, users have asked GPT-3 to write essays comparing modern and contemporary art. Potential applications of GPT-3 are plentiful in almost every field.

GPT-3 performs well on basic questions with well-established answers. For example, it can give a fairly good explanation of photosynthesis. It does not do as well on questions about cutting-edge photosynthesis research; for instance, it cannot describe the mechanism of photosynthesis or the quantum concepts involved. It can give a decent-sounding response, but it is unlikely to provide the technical details behind most research questions. Similarly, GPT-3 can write simple working code, but as the complexity of the task increases, the generated code becomes more error-prone. It also cannot generate content that is normally produced by humans, such as political opinions, ethical judgments, investment advice, or accurate news reports.

Despite its limitations, GPT-3's broad applicability is impressive. I thought it would be interesting to come up with some data science and machine learning prompts to see whether they could complement parts of a data science workflow.

First, we will generate some data-science-related text from a few simple prompts. Once we have a feel for the tool, we can ask questions that might help with data science tasks. There are several interesting data science and machine learning questions we can put to GPT-3. For example, can GPT-3 point us to publicly available datasets? How much training data does GPT-3 have? Another interesting application is problem framing: can GPT-3 help users formulate good machine learning research questions? Although it struggles with specific technical answers, it may do a good job of framing open research questions.

Another cool application is using GPT-3 to decide which ML model to use for a particular task. This is appealing because, for well-established techniques with plenty of literature online, it should be able to help users choose a model and explain why the chosen model is the best fit. Finally, we will try to use GPT-3 to write some Python code for data science tasks. For example, we will see whether we can use it to write code that generates synthetic data for specific use cases.

Note: results from the GPT-3 API are not deterministic, so the results you get may differ slightly from the output shown here (passing temperature=0 to the completion call, as some of the later examples do, reduces this variation). Also, since GPT-3 has no fact-checking mechanism, you should double-check any factual results you plan to use for work, school, or personal projects.

For this work, I will write the code in Deepnote, a collaborative data science notebook that makes it easy to run reproducible experiments.

Installing GPT-3

First, go to Deepnote and create a new project (you can sign up for free if you don't already have an account).

Create a project named "GPT3" and, inside it, a notebook named "GPT3_ds".

Next, install OpenAI with pip in the first cell (catboost is installed here as well, since we will use it later to build a model on synthetic data):

%pip install openai
%pip install catboost

Get a secret key from your OpenAI account and save it in the api_key attribute of the openai object:

import openai
openai.api_key = "your-key"
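
Hard-coding the key works for a quick demo, but it is safer to read it from an environment variable so the key never ends up in the notebook itself. A minimal sketch (the variable name OPENAI_API_KEY is just a common convention, not required by the library):

import os
import openai

# read the key from an environment variable instead of pasting it into the code
openai.api_key = os.environ.get("OPENAI_API_KEY", "your-key")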

Now we can ask questions, for example "What is the pandas library?", and GPT-3 will answer:

completion = openai.Completion.create(engine="text-davinci-003", prompt="What is the pandas library?", max_tokens=1000)
print(completion.choices[0]['text'])
# output
Pandas is an open source software library written in Python for data manipulation and analysis. Pandas is widely used in data science, machine learning and many other fields. It provides high-level data structures and tools for handling and manipulating data, including data frames, series, plotting tools and more.
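
Since we will repeat this two-line pattern many times below, it can be convenient to wrap it in a small helper function (my own addition, not part of the original walkthrough). Lowering the temperature also makes answers more repeatable:

def ask_gpt3(prompt, max_tokens=1000, temperature=0.7):
    """Send a prompt to text-davinci-003 and return the completion text."""
    completion = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=max_tokens,
        temperature=temperature,  # lower values (e.g. 0) give more repeatable output
    )
    return completion.choices[0]["text"]

# e.g. print(ask_gpt3("What is the pandas library?", max_tokens=200))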

We can even ask a more specific question, such as "What are some common Pandas use cases?". It gives a reasonable answer, listing uses such as data cleaning, data visualization, data aggregation, and time series analysis:

completion = openai.Completion.create(engine="text-davinci-003", prompt="what are some common Pandas use cases?", max_tokens=240)
print(completion.choices[0]['text'])
# output
1. Data Cleaning and Transformation
2. Data Analysis and Exploration
3. Time Series Analysis
4. Data Visualization
5. Statistical Modeling
6. Predictive Modeling
7. Machine Learning
8. Web Scraping

Ask "What are the most common deep learning libraries?":

#what are the most common deep learning libraries?
completion = openai.Completion.create(engine="text-davinci-003", prompt="what are the most common deep learning libraries?", max_tokens=240)
print(completion.choices[0]['text'])
# output
. TensorFlow 
2. PyTorch 
3. Keras 
4. Caffe 
5. CNTK 
6. MXNet 
7. Theano 
8. Deeplearning4j 
9. Gensim 
10. LUNA

We can even ask "What is a deep neural network?":

completion = openai.Completion.create(engine="text-davinci-003", prompt="What is a deep neural network?", max_tokens=1000)
print(completion.choices[0]['text'])
# output
A deep neural network (DNN) is a type of artificial neural network (ANN) with multiple layers of neurons between the input and output layers. DNNs are designed to learn complex non-linear relationships from data, and have been successfully applied in a wide range of areas such as image recognition, natural language processing, and financial forecasting.

Using GPT-3 for factual search should be done cautiously, since there is no fact-checking mechanism. In most cases, if you want a surface-level understanding of concepts that are well covered in the many online data science and machine learning blogs and forums, GPT-3 should work well.

Feature Engineering

Another interesting use case for GPT-3 is as a supporting tool to guide data science work. For example, can GPT-3 suggest feature transformations that improve model performance? Let's ask it:

completion = openai.Completion.create(engine="text-davinci-003", prompt="give some ideas on feature transformations that can improve model performance", max_tokens=1000)
print(completion.choices[0]['text'])
# output
1. Standardization/Normalization: A common feature transform used to ensure features are on the same scale, standardizing or normalizing variables can help limit the severity of outliers and improve the overall model performance.
2. Feature Binning: Binning is a process of transforming numerical variables into categorical ones. This can be useful when working with variables that have too many levels and can have a significant effect on the model performance.
3. Polynomial Expansion: When a nonlinear relationship is expected between features and the output variable, a polynomial expansion feature transformation can help improve model performance.
4. Feature Selection: Removing redundant or irrelevant features from the dataset can help improve the model performance as these features may lead to overfitting.
5. Ensemble: Combining different types of models (or different versions of the same model) can often improve performance due to their combined capabilities.

We can see that it gives some good feature transformation suggestions, along with an explanation of each.
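
To make a couple of these suggestions concrete, here is a small scikit-learn sketch (my own illustration, not GPT-3 output) chaining standardization and polynomial expansion on a toy array:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, PolynomialFeatures

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 500.0]])
pipeline = make_pipeline(
    StandardScaler(),              # suggestion 1: standardization
    PolynomialFeatures(degree=2),  # suggestion 3: polynomial expansion
)
print(pipeline.fit_transform(X).shape)  # (3, 6): bias column, 2 scaled features, 3 degree-2 terms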

Let's see whether we can take this a step further and have it write some Python code:

completion = openai.Completion.create(engine="text-davinci-003", prompt="Write example python code that performs data standardization", max_tokens=1000)
print(completion.choices[0]['text'])
#output
# Import the necessary libraries
import numpy as np
# Define the data 
data = np.array([[-3, 9, 0, 8],
                 [ 4, 6, 5, 12],
                 [20, 2, 3, 15]])
# Calculate mean and standard deviation
mean = np.mean(data, axis=0)
std = np.std(data, axis=0)
# Perform data standardization
standardized_data = (data - mean) / std
# Print the results
print(standardized_data)

Copy and paste the code into a new cell and run it:

# Import the necessary libraries
import numpy as np
# Define the data 
data = np.array([[-3, 9, 0, 8],
                 [ 4, 6, 5, 12],
                 [20, 2, 3, 15]])
# Calculate mean and standard deviation
mean = np.mean(data, axis=0)
std = np.std(data, axis=0)
# Perform data standardization
standardized_data = (data - mean) / std
# Print the results
print(standardized_data)
# output
[[-1.03881504  1.16247639 -1.29777137 -1.27872403]
 [-0.31164451  0.11624764  1.13554995  0.11624764]
 [ 1.35045955 -1.27872403  0.16222142  1.16247639]]
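
As a quick sanity check (my addition, assuming scikit-learn is installed), the same numbers can be reproduced with scikit-learn's StandardScaler, which applies the same (x - mean) / std formula column by column:

from sklearn.preprocessing import StandardScaler

# should print the same standardized array as above
print(StandardScaler().fit_transform(data))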

Next, let's do the same for a normalization transformation:

completion = openai.Completion.create(engine="text-davinci-003", prompt="Write example python code that performs data normalization on fake data", max_tokens=1000)
print(completion.choices[0]['text'])
# output
# Normalizing data will rescale features in the range [0,1]
data = [3, 7, 10, 13] # Sample data
# Calculate the maximum and minimum of the data
max_data = max(data)
min_data = min(data)
# Normalize the data
normalized_data = [(x-min_data)/(max_data-min_data) for x in data]
# Print first value to check 
print(normalized_data[0]) # Prints 0.2

Run the returned code:

# Normalizing data will rescale features in the range [0,1]
data = [3, 7, 10, 13] # Sample data
# Calculate the maximum and minimum of the data
max_data = max(data)
min_data = min(data)
# Normalize the data
normalized_data = [(x-min_data)/(max_data-min_data) for x in data]
# Print first value to check 
print(normalized_data) 
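
Note that the inline comment in the generated code ("Prints 0.2") is wrong: the first normalized value is (3 - 3) / (13 - 3) = 0.0, a small reminder that GPT-3 output needs checking. As a cross-check (again my addition, not from the article), scikit-learn's MinMaxScaler gives the same values:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# MinMaxScaler expects a 2-D array, so reshape the list into a single column
scaled = MinMaxScaler().fit_transform(np.array(data).reshape(-1, 1))
print(scaled.ravel())  # [0.  0.4 0.7 1. ]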

Model Selection

Another cool application is model selection. Let's see whether GPT-3 can guide us in choosing a machine learning model. Ask "How do I select a time series model?":

completion = openai.Completion.create(engine="text-davinci-003", prompt="How do i select a time series model?", max_tokens=1000)
print(completion.choices[0]['text'])
# output
The selection of a time series model depends on the type of data that is being analyzed. If there are long-term trends or cycles in the data, then an autoregressive integrated moving average (ARIMA) model or a exponential smoothing (ETS) model may be the best option. If the data are seasonal or there are outliers present, then a Holt-Winters model may be a better choice. If the data do not contain any long-term trends or seasonality, a Box-Jenkins model may be suitable. Regardless of the model selected, it is important to consider the type of data that is being modeled when selecting a time series model.

The answer is reasonable: it ties the choice of model to properties of the data such as trend, seasonality, and outliers. Let's ask a more specific question, such as "When to use ARIMA vs Prophet?":

completion = openai.Completion.create(engine="text-davinci-003", prompt="When to use ARIMA vs FB prophet?", max_tokens=1000)
print(completion.choices[0]['text'])
# output
ARIMA and FB Prophet are both used for time series forecasting. ARIMA is a linear model and is better for predicting long-term trends with stationary data. FB Prophet is a non-linear model and is better for predicting short-term trends with non-stationary data. So, you would use ARIMA when you are mainly concerned with predicting long-term trends, and you would use FB Prophet when you are mainly concerned with predicting short-term trends.
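
As an illustration of the first option, a minimal ARIMA fit with statsmodels on made-up data might look like the sketch below (my own addition, not GPT-3 output):

import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=200))      # a simple random-walk series
model = ARIMA(series, order=(1, 1, 1)).fit()  # order = (p, d, q)
print(model.forecast(steps=5))                # next five predicted values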

Generating Synthetic Data

Can GPT-3 generate synthetic data? Let's find out! We will generate two well-known kinds of data: hospital readmission data and transaction data.

  • Hospital readmission data

Ask GPT-3 to write Python code that generates synthetic hospital readmission data:

completion = openai.Completion.create(engine="text-davinci-003", prompt="Write example python code that generates synthetic healthcare readmission data stored in a dataframe", max_tokens=1000)
print(completion.choices[0]['text'])
# output
import pandas as pd
import numpy as np
# Create Dataframe 
df = pd.DataFrame(columns=['Patient_ID', 'Age', 'Admission_Type', 'Readmitted'])
# Generate Data
np.random.seed(0)
for i in range(10):
  admission_type = np.random.choice(['Urgent', 'Scheduled', 'Emergency'])
  patient_age = np.random.randint(18, 80)
  readmission = np.random.choice([0, 1])
  df.loc[i] = [i+1, patient_age, admission_type, readmission]
# Print Dataframe to Console
print(df)

Run this code:

import pandas as pd
import numpy as np
# Create Dataframe 
df = pd.DataFrame(columns=['Patient_ID', 'Age', 'Admission_Type', 'Readmitted'])
# Generate Data
np.random.seed(0)
for i in range(10):
  admission_type = np.random.choice(['Urgent', 'Scheduled', 'Emergency'])
  patient_age = np.random.randint(18, 80)
  readmission = np.random.choice([0, 1])
  df.loc[i] = [i+1, patient_age, admission_type, readmission]
# Print Dataframe to Console
df

Output (shown as a screenshot in the original article): a 10-row dataframe with Patient_ID, Age, Admission_Type, and Readmitted columns.
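
The generated loop only creates 10 rows. If you want a larger synthetic sample, the same idea can be vectorized (my own variation, not GPT-3 output):

import numpy as np
import pandas as pd

n = 1000  # number of synthetic patients
rng = np.random.default_rng(0)
df_large = pd.DataFrame({
    "Patient_ID": np.arange(1, n + 1),
    "Age": rng.integers(18, 80, size=n),
    "Admission_Type": rng.choice(["Urgent", "Scheduled", "Emergency"], size=n),
    "Readmitted": rng.integers(0, 2, size=n),
})
print(df_large.head())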

Let's see whether we can use this synthetic data to build a classification model that predicts who will be readmitted, and evaluate its performance.

completion = openai.Completion.create(engine="text-davinci-003", prompt="Write example python code that generates synthetic healthcare readmission data stored in a dataframe. From this write code that builds a catboost model that predicts readmission outcomes. Also write code to calculate and print performance", max_tokens=3000)
print(completion.choices[0]['text'])
# output
 metrics
## Generate Synthetic Healthcare Readmission Data
import pandas as pd 
import numpy as np 
# set the seed for reproducibility 
np.random.seed(1)
# create dataframe 
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 10)), columns=['age','gender','length_of_stay','diagnosis','NIV','laboratory','past_hospitalizations','medications','bmi','readmission'])
# add labels to data frame 
df['age'] = np.random.randint(20, 80, size=(100))
df['gender'] = np.random.randint(1, 2, size=(100))
df['length_of_stay'] = np.random.randint(2, 14, size=(100))
df['diagnosis'] = np.random.randint(1, 5, size=(100))
df['NIV'] = np.random.randint(0, 2, size=(100))
df['laboratory'] = np.random.randint(1, 6, size=(100))
df['past_hospitalizations'] = np.random.randint(0, 10, size=(100))
df['medications'] = np.random.randint(1, 6, size=(100))
df['bmi'] = np.random.randint(18, 35, size=(100))
df['readmission'] = np.random.randint(0, 2, size=(100))
# print the dataframe 
print(df)
## Build a CatBoost Model
from catboost import CatBoostClassifier
from sklearn.metrics import confusion_matrix
# separate X and y
X = df.iloc[:, 0:9]
y = df.iloc[:, 9]
# initialize catboost classifier 
cat_clf = CatBoostClassifier(iterations=50,
                            learning_rate=0.3,
                            depth=8,
                            eval_metric='Accuracy',
                            random_seed=42)
# fit the model 
cat_clf.fit(X, y)
# predict values
y_pred = cat_clf.predict(X)
# print confusion matrix
conf_mat = confusion_matrix(y, y_pred)
print(conf_mat)
## Calculate and Print Performance Metrics
from sklearn.metrics import accuracy_score, precision_score, recall_score
# calculate performance metrics
acc = accuracy_score(y, y_pred)
precision = precision_score(y, y_pred)
recall = recall_score(y, y_pred)
# print performance metrics 
print("Accuracy: {}".format(acc))
print("Precision: {}".format(precision))
print("Recall: {}".format(recall))

Copy the code above and run it:

## Generate Synthetic Healthcare Readmission Data
import pandas as pd 
import numpy as np 
# set the seed for reproducibility 
np.random.seed(1)
# create dataframe 
df = pd.DataFrame(np.random.randint(0, 100, size=(100, 10)), columns=['age','gender','length_of_stay','diagnosis','NIV','laboratory','past_hospitalizations','medications','bmi','readmission'])
# add labels to data frame 
df['age'] = np.random.randint(20, 80, size=(100))
df['gender'] = np.random.randint(1, 2, size=(100))
df['length_of_stay'] = np.random.randint(2, 14, size=(100))
df['diagnosis'] = np.random.randint(1, 5, size=(100))
df['NIV'] = np.random.randint(0, 2, size=(100))
df['laboratory'] = np.random.randint(1, 6, size=(100))
df['past_hospitalizations'] = np.random.randint(0, 10, size=(100))
df['medications'] = np.random.randint(1, 6, size=(100))
df['bmi'] = np.random.randint(18, 35, size=(100))
df['readmission'] = np.random.randint(0, 2, size=(100))
# print the dataframe 
print(df)
## Build a CatBoost Model
from catboost import CatBoostClassifier
from sklearn.metrics import confusion_matrix
# separate X and y
X = df.iloc[:, 0:9]
y = df.iloc[:, 9]
# initialize catboost classifier 
cat_clf = CatBoostClassifier(iterations=50,
                            learning_rate=0.3,
                            depth=8,
                            eval_metric='Accuracy',
                            random_seed=42)
# fit the model 
cat_clf.fit(X, y)
# predict values
y_pred = cat_clf.predict(X)
# print confusion matrix
conf_mat = confusion_matrix(y, y_pred)
print(conf_mat)
## Calculate and Print Performance Metrics
from sklearn.metrics import accuracy_score, precision_score, recall_score
# calculate performance metrics
acc = accuracy_score(y, y_pred)
precision = precision_score(y, y_pred)
recall = recall_score(y, y_pred)
# print performance metrics 
print("Accuracy: {}".format(acc))
print("Precision: {}".format(precision))
print("Recall: {}".format(recall))
# output: the CatBoost training log, confusion matrix, and accuracy/precision/recall scores are printed (not reproduced here)
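
One caveat worth flagging: the generated code fits and scores the CatBoost model on the same rows, so the reported metrics are overly optimistic (and, since the labels here are random, any apparent signal is spurious anyway). A small sketch of a held-out evaluation, reusing the X and y defined above (my addition):

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from catboost import CatBoostClassifier

# hold out 30% of the synthetic rows for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
clf = CatBoostClassifier(iterations=50, learning_rate=0.3, depth=8, random_seed=42, verbose=0)
clf.fit(X_train, y_train)
print("Held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))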

  • Transaction data

Ask GPT-3 to write Python code that generates synthetic transaction data:

completion = openai.Completion.create(engine="text-davinci-003", prompt="Write example python code that generates synthetic transaction data stored in a dataframe", max_tokens=1000)
print(completion.choices[0]['text'])
# output
import pandas as pd 
import numpy as np 
#create randomly generated customer data
customer_id = np.arange(1,101) 
customer_names = [f'John Doe {x}' for x in range(1,101)] 
#create randomly generated transaction data
transaction_id = np.arange(1,101)
dates = [f'2020-07-{x}' for x in range(1,101)]
amounts = np.random.randint(low=1, high=1000, size=(100,)) 
#create dataframe with randomly generated data
transaction_data = pd.DataFrame({'Customer ID': customer_id, 
                            'Customer Name': customer_names,
                            'Transaction ID': transaction_id, 
                            'Date': dates, 
                            'Amount': amounts})
print(transaction_data)

Copy the code and run it:

import pandas as pd
import numpy as np 
#create randomly generated customer data
customer_id = np.arange(1,101) 
customer_names = [f'John Doe {x}' for x in range(1,101)] 
#create randomly generated transaction data
transaction_id = np.arange(1,101)
dates = [f'2020-07-{x}' for x in range(1,101)]
amounts = np.random.randint(low=1, high=1000, size=(100,)) 
#create dataframe with randomly generated data
transaction_data = pd.DataFrame({'Customer ID': customer_id, 
                            'Customer Name': customer_names,
                            'Transaction ID': transaction_id, 
                            'Date': dates, 
                            'Amount': amounts})
transaction_data

(Partial output, shown as a screenshot in the original article: a dataframe with Customer ID, Customer Name, Transaction ID, Date, and Amount columns.)
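
One thing to watch in the generated code: the date strings run from "2020-07-1" to "2020-07-100", so most of them are not valid calendar dates. If you need real dates, pandas can generate them directly (an illustrative fix, not from the article):

import pandas as pd

# 100 consecutive real dates starting from 1 July 2020
dates = pd.date_range(start="2020-07-01", periods=100, freq="D")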

The data now contains customer IDs, transaction IDs, dates, and purchase amounts. Let's see whether we can get more specific and also include item IDs, age, gender, and zipcode.

completion = openai.Completion.create(engine="text-davinci-003", prompt="Write example python code that generates synthetic transaction data stored in a dataframe. Include customer ID, transaction amount, item ID, item name, age, gender, and zipcode", max_tokens=2000)
print(completion.choices[0]['text'])
# output
import pandas as pd
import numpy as np
rows = ['customer_ID', 'transaction_amnt', 'item_ID', 'item_name', 'age', 'gender', 'zipcode']
data = pd.DataFrame(columns=rows)  
for i in range(1,100):
        customer_ID = int( np.random.uniform(100,600-100)) 
        transaction_amnt = np.random.uniform(1.25, 10.00)
        item_ID = int( np.random.uniform(1,35))
        item_name = np.random.choice(["phone", "tablet", "laptop", "smartwatch"])
        age = int( np.random.uniform(17,75)) 
        gender = np.random.choice(["male", "female"]) 
        zipcode = np.random.choice(["98101", "98200", "98469", "98801"])
        data.loc[i] = [customer_ID, transaction_amnt, item_ID, item_name, age, gender, zipcode]
print (data)

Run the code:

import pandas as pd
import numpy as np
rows = ['customer_ID', 'transaction_amnt', 'item_ID', 'item_name', 'age', 'gender', 'zipcode']
data = pd.DataFrame(columns=rows)  
for i in range(1,100):
        customer_ID = int( np.random.uniform(100,600-100)) 
        transaction_amnt = np.random.uniform(1.25, 10.00)
        item_ID = int( np.random.uniform(1,35))
        item_name = np.random.choice(["phone", "tablet", "laptop", "smartwatch"])
        age = int( np.random.uniform(17,75)) 
        gender = np.random.choice(["male", "female"]) 
        zipcode = np.random.choice(["98101", "98200", "98469", "98801"])
        data.loc[i] = [customer_ID, transaction_amnt, item_ID, item_name, age, gender, zipcode]
data

(Partial output, shown as a screenshot in the original article: a dataframe with customer_ID, transaction_amnt, item_ID, item_name, age, gender, and zipcode columns.)

Prompting for Public Datasets

Another use is asking GPT-3 about public datasets. Let's ask it to list some:

completion = openai.Completion.create(engine="text-davinci-003", prompt=" list some good public datasets", max_tokens=1000)
print(completion.choices[0]['text'])
# output
1. US Census Data
2. Enron Email Dataset
3. Global Open Data Index
4. Air Quality Monitoring Data
5. New York City Taxi Trip Data
6. IMF Data
7. World Bank Open Data
8. Google Books Ngrams Dataset
9. Amazon Reviews Dataset
10. UCI Machine Learning Repository

Let's see whether we can find public data under the Apache 2.0 license, and also ask for links to the sources:

completion = openai.Completion.create(engine="text-davinci-003", prompt=" list some good public datasets under apache 2.0 license. provide links to their source", max_tokens=1000, temperature=0)
print(completion.choices[0]['text'])
# output
1. OpenStreetMap: https://www.openstreetmap.org/
2. US Census Data: https://www.census.gov/data.html
3. Google Books Ngrams: https://aws.amazon.com/datasets/google-books-ngrams/
4. Wikipedia: https://dumps.wikimedia.org/enwiki/
5. US Government Spending Data: https://www.usaspending.gov/
6. World Bank Open Data: https://data.worldbank.org/
7. Common Crawl: http://commoncrawl.org/
8. Open Images: https://storage.googleapis.com/openimages/web/index.html
9. OpenFlights: https://openflights.org/data.html
10. GDELT: http://data.gdeltproject.org/

Although not all of these links are correct, it does a fairly good job of finding sources. The Google Ngrams and Common Crawl entries, along with NASA data suggested in other runs, were quite good. In most cases, when it does not give the exact location of the data, it gives a link to a webpage where the data can be found.

Let's also ask it to describe the data. Note that although the results may overlap, they differ slightly from run to run; as far as I can tell, the results are not always reproducible:

completion = openai.Completion.create(engine="text-davinci-003", prompt=" list some good public datasets under apache 2.0 license. provide links to their source and descriptions", max_tokens=1000, temperature=0)
print(completion.choices[0]['text'])
# output
1. OpenStreetMap: OpenStreetMap is a free, editable map of the world, created and maintained by volunteers and available for use under an open license. It contains millions of data points, including roads, buildings, and points of interest. Source: https://www.openstreetmap.org/
2. Google Books Ngrams: Google Books Ngrams is a dataset of over 5 million books from Google Books, spanning from 1500 to 2008. It contains word counts for each year, allowing researchers to track the usage of words over time. Source: https://aws.amazon.com/datasets/google-books-ngrams/
3. Wikipedia: Wikipedia is a free, open-source encyclopedia with millions of articles in hundreds of languages. It is available for use under the Creative Commons Attribution-ShareAlike license. Source: https://www.wikipedia.org/
4. Common Crawl: Common Crawl is a large-scale web crawl that collects data from over 5 billion webpages. It is available for use under the Apache 2.0 license. Source: https://commoncrawl.org/
5. Open Images Dataset: The Open Images Dataset is a collection of 9 million images annotated with labels spanning over 6000 categories. It is available for use under the Apache 2.0 license. Source: https://storage.googleapis.com/openimages/web/index.html

Framing Machine Learning Problems

As a final example, let's see whether GPT-3 can help us frame machine learning problems.

  • Asking about use cases

Although GPT-3's training data only goes up to 2021, it can still help us frame ML use cases that are relevant today. Let's ask, "What are some emerging machine learning use-cases in social media?":

completion = openai.Completion.create(engine="text-davinci-003", prompt="What are some emerging machine learning use-cases in social media?", max_tokens=1000, temperature=0)
print(completion.choices[0]['text'])
# output
1. Automated Content Curation: Automatically curating content from social media platforms to create personalized content feeds for users.
2. Sentiment Analysis: Analyzing user sentiment from social media posts to gain insights into customer opinions and preferences.
3. Social Media Monitoring: Using machine learning algorithms to monitor social media conversations and detect potential issues or trends.
4. Social Media Advertising: Leveraging machine learning to optimize social media advertising campaigns and target the right audience.
5. Social Media Recommendations: Using machine learning to recommend content to users based on their interests and preferences.

We can also look at its answer for the healthcare domain:

completion = openai.Completion.create(engine="text-davinci-003", prompt="What are some emerging machine learning use-cases in healthcare?", max_tokens=1000, temperature=0)
print(completion.choices[0]['text'])
# output
1. Automated Diagnosis: Machine learning algorithms can be used to analyze patient data and medical images to detect and diagnose diseases.
2. Personalized Medicine: Machine learning algorithms can be used to analyze patient data and medical images to create personalized treatment plans for each patient.
3. Drug Discovery: Machine learning algorithms can be used to analyze large datasets of chemical compounds to identify potential new drugs.
4. Clinical Decision Support: Machine learning algorithms can be used to analyze patient data and medical images to provide clinicians with real-time decision support.
5. Predictive Analytics: Machine learning algorithms can be used to analyze patient data and medical images to predict future health outcomes.

  • Asking about research questions

Now let's see whether it can propose some research questions. Let's ask: "What are some good research questions on using deep learning for image detection?"

completion = openai.Completion.create(engine="text-davinci-003", prompt="What are some good research questions on using deep learning for image detection?", max_tokens=1000, temperature=0)
print(completion.choices[0]['text'])
# output
1. What are the most effective deep learning architectures for image detection?
2. How can deep learning be used to improve the accuracy of image detection?
3. What are the most effective methods for training deep learning models for image detection?
4. How can deep learning be used to detect objects in images with varying levels of complexity?
5. How can deep learning be used to detect objects in images with varying levels of illumination?
6. How can deep learning be used to detect objects in images with varying levels of noise?
7. How can deep learning be used to detect objects in images with varying levels of resolution?
8. How can deep learning be used to detect objects in images with varying levels of occlusion?
9. How can deep learning be used to detect objects in images with varying levels of background clutter?
10. How can deep learning be used to detect objects in images with varying levels of rotation?

Let's also ask about NLP:

completion = openai.Completion.create(engine="text-davinci-003", prompt="What are some good research questions related to NLP transformer models?", max_tokens=1000, temperature=0)
print(completion.choices[0]['text'])
# output
1. How can transformer models be used to improve the accuracy of natural language processing tasks?
2. What are the most effective methods for training transformer models for natural language processing tasks?
3. How can transformer models be used to improve the efficiency of natural language processing tasks?
4. What are the most effective methods for optimizing transformer models for natural language processing tasks?
5. How can transformer models be used to improve the interpretability of natural language processing tasks?
6. What are the most effective methods for deploying transformer models for natural language processing tasks?
7. How can transformer models be used to improve the scalability of natural language processing tasks?
8. What are the most effective methods for combining transformer models with other natural language processing techniques?
9. How can transformer models be used to improve the robustness of natural language processing tasks?
10. What are the most effective methods for evaluating transformer models for natural language processing tasks?

All of the code in this article is available on GitHub.
