
Machine Learning in Action, Classifying with Probability Theory: Naive Bayes (Source Code Walkthrough and Error Analysis)

Posted: 2021-05-30 07:32:30


As usual, here is the full code first:

from numpy import *


def LoadDataSet():
    postingList = [['my', 'dog', 'has', 'flea', 'problems', 'help', 'please'],
                   ['maybe', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid'],
                   ['my', 'dalmation', 'is', 'so', 'cute', 'I', 'love', 'him'],
                   ['stop', 'posting', 'stupid', 'worthless', 'garbage'],
                   ['mr', 'licks', 'ate', 'my', 'steak', 'how', 'to', 'stop', 'him'],
                   ['quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]
    classVec = [0, 1, 0, 1, 0, 1]  # 1 is abusive, 0 not
    return postingList, classVec


def CreateVocabList(dataSet):
    vocabSet = set([])  # create empty set
    for document in dataSet:
        vocabSet = vocabSet | set(document)  # union of the two sets
    return list(vocabSet)


def SetOfWords2Vec(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] = 1
        else:
            print("the word: %s is not in my Vocabulary!" % word)
    return returnVec


def BagOfWords2VecMN(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
    return returnVec


def trainNB0(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory) / float(numTrainDocs)
    p0Num = ones(numWords)
    p1Num = ones(numWords)  # changed to ones() so no conditional probability is zero
    p0Denom = 2.0
    p1Denom = 2.0  # changed to 2.0 to match the smoothing in the numerators
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = log(p1Num / p1Denom)  # changed to log() to avoid underflow
    p0Vect = log(p0Num / p0Denom)  # changed to log()
    return p0Vect, p1Vect, pAbusive


def ClassifyNB(vec2classify, p0Vec, p1Vec, pClass1):
    p1 = sum(vec2classify * p1Vec) + log(pClass1)
    p0 = sum(vec2classify * p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0


def TextParse(bigString):  # input is a big string, output is a word list
    import re
    # the book uses r'\W*', which can match the empty string and splits the
    # text into single characters on Python 3.7+; r'\W+' is the fix
    listOfTokens = re.split(r'\W+', bigString)
    return [tok.lower() for tok in listOfTokens if len(tok) > 2]


def SpamTest():
    docList = []
    classList = []
    fullText = []
    for i in range(1, 26):
        # raw strings so the Windows backslashes are not treated as escapes
        wordList = TextParse(open(r'machinelearninginaction\Ch04\email\spam\%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(1)
        wordList = TextParse(open(r'machinelearninginaction\Ch04\email\ham\%d.txt' % i).read())
        docList.append(wordList)
        fullText.extend(wordList)
        classList.append(0)
    vocabList = CreateVocabList(docList)  # create vocabulary
    trainingSet = list(range(50))
    testSet = []  # create test set
    for i in range(10):
        randIndex = int(random.uniform(0, len(trainingSet)))
        testSet.append(trainingSet[randIndex])
        del (trainingSet[randIndex])
    trainMat = []
    trainClasses = []
    for docIndex in trainingSet:  # train the classifier (get probs) with trainNB0
        trainMat.append(BagOfWords2VecMN(vocabList, docList[docIndex]))
        trainClasses.append(classList[docIndex])
    p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
    errorCount = 0
    for docIndex in testSet:  # classify the remaining items
        wordVector = BagOfWords2VecMN(vocabList, docList[docIndex])
        if ClassifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
            errorCount += 1
            print("classification error", docList[docIndex])
    print('the error rate is: ', float(errorCount) / len(testSet))
    # return vocabList, fullText


# The book's RSS-feed classifier, left commented out:
# def CalcMostFreq(vocabList, fullText):
#     import operator
#     freqDict = {}
#     for token in vocabList:
#         freqDict[token] = fullText.count(token)
#     sortedFreq = sorted(freqDict.items(), key=operator.itemgetter(1), reverse=True)
#     return sortedFreq[:30]
#
#
# def localWords(feed1, feed0):
#     import feedparser
#     docList = []
#     classList = []
#     fullText = []
#     minLen = min(len(feed1['entries']), len(feed0['entries']))
#     for i in range(minLen):
#         wordList = TextParse(feed1['entries'][i]['summary'])
#         docList.append(wordList)
#         fullText.extend(wordList)
#         classList.append(1)  # NY is class 1
#         wordList = TextParse(feed0['entries'][i]['summary'])
#         docList.append(wordList)
#         fullText.extend(wordList)
#         classList.append(0)
#     vocabList = CreateVocabList(docList)  # create vocabulary
#     top30Words = CalcMostFreq(vocabList, fullText)  # remove top 30 words
#     for pairW in top30Words:
#         if pairW[0] in vocabList:
#             vocabList.remove(pairW[0])
#     trainingSet = list(range(2 * minLen))
#     testSet = []  # create test set
#     for i in range(20):
#         randIndex = int(random.uniform(0, len(trainingSet)))
#         testSet.append(trainingSet[randIndex])
#         del (trainingSet[randIndex])
#     trainMat = []
#     trainClasses = []
#     for docIndex in trainingSet:  # train the classifier (get probs) with trainNB0
#         trainMat.append(BagOfWords2VecMN(vocabList, docList[docIndex]))
#         trainClasses.append(classList[docIndex])
#     p0V, p1V, pSpam = trainNB0(array(trainMat), array(trainClasses))
#     errorCount = 0
#     for docIndex in testSet:  # classify the remaining items
#         wordVector = BagOfWords2VecMN(vocabList, docList[docIndex])
#         if ClassifyNB(array(wordVector), p0V, p1V, pSpam) != classList[docIndex]:
#             errorCount += 1
#     print('the error rate is: ', float(errorCount) / len(testSet))
#     return vocabList, p0V, p1V
#
#
# def getTopWords(ny, sf):
#     import operator
#     vocabList, p0V, p1V = localWords(ny, sf)
#     topNY = []
#     topSF = []
#     for i in range(len(p0V)):
#         if p0V[i] > -6.0:
#             topSF.append((vocabList[i], p0V[i]))
#         if p1V[i] > -6.0:
#             topNY.append((vocabList[i], p1V[i]))
#     sortedSF = sorted(topSF, key=lambda pair: pair[1], reverse=True)
#     print("SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**SF**")
#     for item in sortedSF:
#         print(item[0])
#     sortedNY = sorted(topNY, key=lambda pair: pair[1], reverse=True)
#     print("NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**NY**")
#     for item in sortedNY:
#         print(item[0])
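To run the spam filter end to end, something like the following entry point is enough, assuming the email\spam and email\ham folders from the book's Ch04 data sit at the hardcoded paths:

if __name__ == '__main__':
    SpamTest()  # prints any misclassified emails and the test error rate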

Errors encountered while running the code

Error 1:

UnicodeDecodeError: 'gbk' codec can't decode byte 0xae in position 199: illegal multibyte sequence

This error means Python failed to read one of the txt files; it is really an encoding problem. Follow the traceback to the offending line, locate the problematic txt file in the spam or ham folder, and check its encoding. If the encoding is wrong, re-save the file in the correct encoding by hand, or, if your editor can detect and switch encodings automatically, just change it there.
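Alternatively, rather than editing the files, you can make the read itself tolerant. This is my own workaround, not from the book: pass an explicit encoding to open() and skip the few undecodable bytes:

# workaround: force an encoding instead of the platform default (GBK on
# Chinese Windows) and ignore undecodable bytes like the 0xae reported above
wordList = TextParse(open(r'machinelearninginaction\Ch04\email\spam\%d.txt' % i,
                          encoding='utf-8', errors='ignore').read())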

Error 2:

TypeError: 'range' object doesn't support item deletion

In Python 3, range() returns an immutable range object, but SpamTest() later removes the chosen test indices with del trainingSet[randIndex], which requires a mutable list. Change

trainingSet = range(50)

to

trainingSet = list(range(50))

and the error goes away.
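You can see the difference directly in the interpreter:

>>> trainingSet = range(50)
>>> del trainingSet[0]
TypeError: 'range' object doesn't support item deletion
>>> trainingSet = list(range(50))
>>> del trainingSet[0]  # fine: lists support item deletion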

The source code and data sets for the book can be found here:

/iamoldpan/article/details/78010329
