Why am I getting only 52% accuracy?

Here is the code.


CSV FILE:
https://puu.sh/EhwXz/8c5040da5b.csv

Hi,
Originally some jokes have quotes around them and some don't. But in your result no joke has quotes, which might be the reason for the lower accuracy.
You are also doing some unnecessarily lengthy work. Check this out for a better version of the code.


import requests
import json
import pandas as pd

# Fetch every joke from the ICNDb API in one request.
url = "http://api.icndb.com/jokes/json?"
parameters = {
    "value": ""
}

r = requests.get(url, params=parameters)
pa = json.loads(r.content)
m = pa["value"]  # list of dicts with "id" and "joke" keys

# IDs of the jokes we actually need, taken from the test-case file.
test = pd.read_csv("Test/ID.csv").values
test = test.reshape(-1,)

# Keep only the jokes whose ID appears in the test set.
all_jokes = []
for entry in m:
    if entry["id"] in test:
        all_jokes.append(entry["joke"])

# Pair each test ID with a collected joke and write the submission file.
# Note: this pairing assumes the jokes come back in the same order as the IDs in ID.csv.
df = pd.DataFrame({"ID": test, "Joke": all_jokes})
df.to_csv("filename.csv", index=False)
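
As a quick way to verify the quotes point above, here is a minimal sketch (assuming the filename.csv written by the code, with its Joke column) that counts how many jokes still contain double quotes after writing the CSV. The html.unescape step is only a suggestion, in case the API returns HTML-encoded characters such as &quot;:

import html
import pandas as pd

# Sketch: check the file written above (assumes a "Joke" column).
df = pd.read_csv("filename.csv")

# Count jokes that still contain literal double quotes.
has_quotes = df["Joke"].str.contains('"', regex=False)
print(has_quotes.sum(), "of", len(df), "jokes contain double quotes")

# Hypothetical extra step: if the API returned HTML-encoded characters
# (e.g. &quot;), decode them so the quotes survive as real characters.
df["Joke"] = df["Joke"].apply(html.unescape)
df.to_csv("filename.csv", index=False)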

Thanks :slight_smile:

@mohituniyal2010
Will you tell me what the file Test/ID.csv contains and what it is used for?
Also, I am not getting how the input is actually taken, please clarify.
I have made a CSV file containing all the IDs and jokes, but I have no idea what to do next.
Please reply ASAP.

First thing: you don't need to make a CSV of all the IDs and jokes.
You only have to keep those jokes whose corresponding ID is given in the ID.csv file.
This file will look something like this:

So what I did in the code was first get all the jokes from the website, but while making the CSV I only considered the relevant jokes (the ones whose IDs are given in that file).
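
For reference, the same filtering can be done with a pandas isin lookup instead of a manual loop. This is just a sketch, assuming Test/ID.csv holds one joke ID per row and the API response still has the id/joke fields used above:

import requests
import json
import pandas as pd

# Fetch all jokes and put them straight into a DataFrame.
resp = requests.get("http://api.icndb.com/jokes/json?", params={"value": ""})
jokes = json.loads(resp.content)["value"]
all_df = pd.DataFrame(jokes)[["id", "joke"]].rename(columns={"id": "ID", "joke": "Joke"})

# Keep only the rows whose ID is listed in the test-case file.
ids = pd.read_csv("Test/ID.csv").values.reshape(-1,)
relevant = all_df[all_df["ID"].isin(ids)]
relevant.to_csv("filename.csv", index=False)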

I hope this is clear now

One more thing: just tell me where you got the ID.csv file that you were reading in the code.
Please reply.

The only thing I am not getting is test = pd.read_csv("Test/ID.csv").values.
Where does this "Test/ID.csv" come from?

Download the test cases from the portal… You will see a big button named "Test Cases".
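
Once downloaded, Test/ID.csv is just a single column of joke IDs, and that line flattens it into a plain array. Here is a small sketch of what happens (the exact column header is an assumption about the downloaded file):

import pandas as pd

# Sketch: "Test/ID.csv" is assumed to be a one-column list of IDs, e.g.
#   ID
#   5
#   17
#   42
test = pd.read_csv("Test/ID.csv").values  # 2-D array of shape (n, 1)
test = test.reshape(-1,)                  # flatten to a 1-D array of joke IDs
print(test[:5])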

Thank you so much @mohituniyal2010

Please tell me how to submit the CSV file I have made.
What I did was first upload the file and then press the submit button, but nothing happens.
What should I do?

Just wait for a day or two.
The tech team is resolving this issue.
I will inform you when this is resolved.