Datathons Solutions

PMI


NFT Datathon 2022

Mentor: Alexandar Efremov

Team: Daniel Pavlov, Martin Nenov, Aleksandar Svinarov

Technologies we used:

-PyCharm

-Google Colab

What was our approach:

Our approach to the NFT sales dataset and the NFT trait types dataset:

 

NFT traits dataset:

We wrote a function that generates a new trait dataset containing only the rarity score of each trait for each NFT. We later use this data to train our model.
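The post does not spell out the rarity formula, so here is a minimal sketch of one common approach, assuming rarity is the inverse of a trait value's frequency share (the exact formula the team used is not given; the trait names are illustrative):

```python
import pandas as pd

def rarity_scores(traits: pd.DataFrame) -> pd.DataFrame:
    """Score each trait value as 1 / (its frequency share), so rarer
    values get higher scores. Returns one rarity column per trait."""
    scores = pd.DataFrame(index=traits.index)
    for col in traits.columns:
        # share of NFTs carrying each value of this trait
        freq = traits[col].value_counts(normalize=True)
        scores[f"{col}_rarity"] = 1.0 / traits[col].map(freq)
    return scores

# toy example: 'blue' appears in 1 of 4 NFTs -> rarity score 4.0
demo = pd.DataFrame({"Hair": ["red", "red", "red", "blue"]})
print(rarity_scores(demo))
```

A per-NFT total can then be obtained by summing the per-trait columns, if a single score is needed.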

NFT sales dataset:

We wanted to add more features to the sales dataset, so we added a 'time_diff' feature, which tells us how much time elapsed before the next transaction took place.

 

We also added a 'price_diff' feature, which gives the price difference between each transaction and the previous one.
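The two derived features could be computed with pandas roughly like this (a sketch; it assumes the diffs are taken per token_id and that the timestamp is in Unix seconds, which the post does not state):

```python
import pandas as pd

def add_diffs(sales: pd.DataFrame) -> pd.DataFrame:
    """Add 'time_diff' (seconds since the previous sale of the same token)
    and 'price_diff' (USD change versus the previous sale of that token)."""
    out = sales.sort_values(["token_id", "timestamp"]).copy()
    grouped = out.groupby("token_id")
    out["time_diff"] = grouped["timestamp"].diff()   # NaN on a token's first sale
    out["price_diff"] = grouped["amountUSD"].diff()
    return out

sales = pd.DataFrame({
    "token_id":  [1, 1, 2],
    "timestamp": [100, 160, 50],        # Unix seconds (illustrative)
    "amountUSD": [10.0, 12.5, 99.0],
})
print(add_diffs(sales)[["token_id", "time_diff", "price_diff"]])
```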

Filtering of sales dataset:

We wrote a function that filters out any 3 consecutive transactions whose total 'time_diff' is greater than or equal to 24 hours, since we think such transactions would hurt the training of the model.

We removed transactions with zero loss/profit.
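One way to sketch both filters, assuming 'time_diff' is in seconds and the 24-hour rule applies to any window of 3 consecutive rows (the post does not show the actual implementation):

```python
import pandas as pd

DAY = 24 * 3600  # 24 hours in seconds

def filter_sales(sales: pd.DataFrame) -> pd.DataFrame:
    """Drop zero-profit/loss rows, then drop every row belonging to a
    window of 3 consecutive transactions whose summed time_diff >= 24h."""
    out = sales[sales["price_diff"] != 0].copy()
    # rolling sum of time_diff over each window of 3 consecutive rows
    roll = out["time_diff"].rolling(3).sum()
    bad = roll >= DAY
    # a window ending at row i covers rows i-2..i, so propagate backwards
    bad = bad | bad.shift(-1, fill_value=False) | bad.shift(-2, fill_value=False)
    return out[~bad]

demo = pd.DataFrame({
    "time_diff":  [1000, 1000, 1000, 90000, 100],
    "price_diff": [1.0, 2.0, 3.0, 4.0, 5.0],
})
print(filter_sales(demo))
```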

The Python file that includes our functions for filtering and generating the new datasets:

main

 

Data Prep

In our data preparation we consider data from the filtered and improved NFT sales dataset and the newly generated NFT rarity score dataset.

Data we decided to exclude from the NFT sales dataset: hash, the from and to addresses, currency (it is always ETH), and amount (because we use amountUSD instead).

We still make use of the from and to addresses indirectly, through the newly added 'time_diff' feature.

Features we will use: timestamp, token_id, gas_price, block_number, amountUSD, time_diff, price_diff, and the rarity score for each trait.

We merge the rarity score dataset and the new sales dataset into a single dataset that feeds into our models.
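The merge step could look like this (a sketch; the frames and the rarity column names are illustrative stand-ins for the real data):

```python
import pandas as pd

# hypothetical frames standing in for the cleaned sales data and the
# per-token rarity scores
sales = pd.DataFrame({
    "token_id": [1, 2],
    "amountUSD": [10.0, 99.0],
    "time_diff": [60.0, 30.0],
})
rarity = pd.DataFrame({
    "token_id": [1, 2],
    "Hair_rarity": [4.0, 1.5],
    "Ear_rarity": [2.0, 2.0],
})

# left-join the rarity scores onto every sale of the same token
model_input = sales.merge(rarity, on="token_id", how="left")
print(model_input.columns.tolist())
```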

What was our approach to training and testing models:

 

The dataset we will be using:

 

First we use a MinMaxScaler to scale our data.
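Scaling with MinMaxScaler looks roughly like this (a sketch on toy data; each feature is mapped to [0, 1]):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# toy feature matrix: two features on very different scales
X = np.array([[1.0, 200.0],
              [2.0, 400.0],
              [3.0, 600.0]])

scaler = MinMaxScaler()          # rescales each feature column to [0, 1]
X_scaled = scaler.fit_transform(X)
print(X_scaled)
```

Note that to avoid leakage the scaler should be fitted on the training split only and then applied to the test split with `transform` (or used inside a Pipeline, which handles this automatically during cross-validation).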

 

We checked the correlations between our features, and they do not look good.
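The correlation check can be done directly with pandas (a sketch on toy data; the real feature frame is the merged dataset described above):

```python
import pandas as pd

# toy stand-in for the merged feature frame
df = pd.DataFrame({
    "amountUSD": [10.0, 12.0, 50.0, 55.0],
    "gas_price": [3.0, 2.9, 3.1, 3.0],
    "time_diff": [60.0, 30.0, 45.0, 600.0],
})

corr = df.corr()  # pairwise Pearson correlations between features
print(corr.round(2))
```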

 

We wrote a function so we can test and evaluate the performance of each model and decide which approach is best:

from sklearn.model_selection import cross_val_score
from sklearn.metrics import r2_score, mean_absolute_error

def get_model_score(pipeline, X_train, X_test, y_train, y_test):
    pipeline.fit(X_train, y_train)
    cv = cross_val_score(pipeline, X_train, y_train, scoring='r2', cv=5)
    cv_score = cv.mean()
    r2_score_train = r2_score(y_train, pipeline.predict(X_train))
    r2_score_test = r2_score(y_test, pipeline.predict(X_test))
    # note: this metric is the mean absolute error (MAE), not MSE
    mae_train = mean_absolute_error(y_train, pipeline.predict(X_train))
    mae_test = mean_absolute_error(y_test, pipeline.predict(X_test))
    print("Mean CV score: ", cv_score)
    print("R2 score_train: ", r2_score_train)
    print("R2 score_test: ", r2_score_test)
    print("Train Mean Absolute Error: ", mae_train)
    print("Test Mean Absolute Error: ", mae_test)
The models we tested:

ElasticNet():
Mean CV score: 0.13999108115749495
R2 score_train: 0.13605417499767947
R2 score_test: 0.14849652361088272
Train Mean Absolute Error: 11722.952243577003
Test Mean Absolute Error: 11602.287294382337

LassoCV():
Mean CV score: 0.8337568928629437
R2 score_train: 0.8377181570986998
R2 score_test: 0.863329214243554
Train Mean Absolute Error: 3955.4001216944985
Test Mean Absolute Error: 3837.4653307966028

Ridge():
Mean CV score: 0.8249936957827941
R2 score_train: 0.8309548531829719
R2 score_test: 0.8539838822944252
Train Mean Absolute Error: 4116.7952561342345
Test Mean Absolute Error: 3988.2573285269737

DecisionTreeRegressor():
Mean CV score: 0.8644925549076989
R2 score_train: 1.0
R2 score_test: 0.8700024228591667
Train Mean Absolute Error: 1.3105354548027381e-09
Test Mean Absolute Error: 1989.5750561674213
RandomForestRegressor():
We use our function to evaluate the model:

RESULT:
Mean CV score: 0.9150824025020732
R2 score_train: 0.9895902048948938
R2 score_test: 0.9392183606699344
Train Mean Absolute Error: 579.0126696967735
Test Mean Absolute Error: 1456.3965730645527
RESULTS ANALYSIS
We get the best results with RandomForestRegressor.
We decided to try to improve the model by using GridSearchCV:

gsRFR = GridSearchCV(RandomForestRegressor(), paramsRFR, scoring='r2', n_jobs=5)

We train our model:
COMPARING RESULTS:
Link to the Jupyter notebook:
FULL RESULTS: test real values and predicted values
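A self-contained sketch of the grid search step; the `paramsRFR` grid below is hypothetical, since the post does not list the actual values it searched over, and the synthetic regression data stands in for the real merged dataset:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# synthetic data standing in for the merged NFT dataset
X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)

# hypothetical parameter grid -- the post does not show the real paramsRFR
paramsRFR = {"n_estimators": [50, 100], "max_depth": [None, 10]}

gsRFR = GridSearchCV(RandomForestRegressor(random_state=0), paramsRFR,
                     scoring="r2", n_jobs=5)
gsRFR.fit(X, y)           # fits one model per grid point with 5-fold CV
print(gsRFR.best_params_)
```

`gsRFR.best_estimator_` is then refit on the full training data and can be evaluated with the same `get_model_score` routine as the other models.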

What we can improve:

 

 


5 thoughts on “PMI”

  1.

    A few questions I think it would be good to address in the article and presentation:
    Rarity: can you include your approach to the rarity score?
    Features: where do the values for ‘Ear’, ‘Hair’, etc. come from?
    Results: can you do some sort of visual comparison between the test predictions and the ground truth? Also, how would you approach the confidence range of your predictions? How do your predictions’ confidences correlate with how good the predictions are?

    Huge thumbs up! I really liked the systematic approach to choosing the “right” model.

    Further suggestion: you might want to explore how different test/train splits affect your model (a very similar idea to CV, but in the sense of testing the actual test set, which ideally should be curated).

  2.

    It is good that you tried different algorithms; however, it is not clear how the data was split into train/test. Based on the results, it looks like there is data leakage between the train and test sets.

  3.

    The data cleaning is important, and it is good that you focused on that, as well as the generation of the additional derived variables based on the timestamp.
    I also like that you trained different models.
    Finally, the very high R2 suggests that something is wrong. 🙂
    Also, as I understood it, you predict the prices of all NFTs. What about a price forecast for a single NFT?
