Monday 27 May 2019

Face and Eye Detection with OpenCV (Haar cascades)

import numpy as np
import cv2

# multiple cascades: https://github.com/Itseez/opencv/tree/master/data/haarcascades

#https://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml
face_cascade = cv2.CascadeClassifier('D:\\opencv\\OpenCV Final Softwares\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_default.xml')
#https://github.com/Itseez/opencv/blob/master/data/haarcascades/haarcascade_eye.xml
eye_cascade = cv2.CascadeClassifier('D:\\opencv\\OpenCV Final Softwares\\opencv\\sources\\data\\haarcascades\\haarcascade_eye.xml')

cap = cv2.VideoCapture(0)  # open the default webcam

while True:
    ret, img = cap.read()
    if not ret:  # stop if no frame could be read
        break
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # detect faces: scale factor 1.3, at least 5 neighbouring detections
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        # search for eyes only inside the detected face region
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2)

    cv2.imshow('img', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # Esc key exits the loop
        break

cap.release()
cv2.destroyAllWindows()
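
The same cascade also works on a single still image instead of a webcam stream. A minimal sketch, assuming the same cascade file as above; the file names face_sample.jpg and face_sample_out.jpg are hypothetical:

import cv2

face_cascade = cv2.CascadeClassifier('D:\\opencv\\OpenCV Final Softwares\\opencv\\sources\\data\\haarcascades\\haarcascade_frontalface_default.xml')
img = cv2.imread('face_sample.jpg')  # hypothetical input file
if img is None:
    raise SystemExit("image not found")

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imwrite('face_sample_out.jpg', img)  # save the annotated copy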

Saturday 25 May 2019

Text Mining


Step 1 : Install and load the required packages
Type the R code below to install and load the required packages:
# Install
install.packages("tm")  # for text mining
install.packages("SnowballC") # for text stemming
install.packages("wordcloud") # word-cloud generator
install.packages("RColorBrewer") # color palettes
# Load
library("tm")
library("SnowballC")
library("wordcloud")
library("RColorBrewer")

Step 2 : Text mining

1.     Load the text

# Read the text file from the internet
filePath <- "http://www.sthda.com/sthda/RDoc/example-files/martin-luther-king-i-have-a-dream-speech.txt"
text <- readLines(filePath)
 
2.     Load the data as a corpus
# Load the data as a corpus
docs <- Corpus(VectorSource(text))
 
3.     Inspect the content of the document
inspect(docs)
 
 

Step 3 : Text transformation

Transformations are performed with the tm_map() function, for example to replace special characters in the text with spaces.
Here we replace “/”, “@” and “|” with a space:
toSpace <- content_transformer(function (x , pattern ) gsub(pattern, " ", x))
docs <- tm_map(docs, toSpace, "/")
docs <- tm_map(docs, toSpace, "@")
docs <- tm_map(docs, toSpace, "\\|")

Cleaning the text

The tm_map() function is also used to remove unnecessary white space, convert the text to lower case, and remove common stopwords like “the” and “we”.
The information value of stopwords is near zero because they are so common in a language, so removing them is useful before further analysis. For stopwords(), the supported languages are danish, dutch, english, finnish, french, german, hungarian, italian, norwegian, portuguese, russian, spanish and swedish; language names are case sensitive.

# Convert the text to lower case
docs <- tm_map(docs, content_transformer(tolower))
# Remove numbers
docs <- tm_map(docs, removeNumbers)
# Remove English common stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
# Remove your own stop words
# (specify your stopwords as a character vector)
docs <- tm_map(docs, removeWords, c("blabla1", "blabla2"))
# Remove punctuation
docs <- tm_map(docs, removePunctuation)
# Eliminate extra white spaces
docs <- tm_map(docs, stripWhitespace)
# Text stemming (optional)
# docs <- tm_map(docs, stemDocument)

Step 4 : Build a term-document matrix

A term-document matrix is a table containing the frequency of the words. Row names are words and column names are documents. The TermDocumentMatrix() function from the tm package can be used as follows:
dtm <- TermDocumentMatrix(docs)
m <- as.matrix(dtm)
v <- sort(rowSums(m),decreasing=TRUE)
d <- data.frame(word = names(v),freq=v)
head(d, 10)

      word freq
will         will   17
freedom   freedom   13
ring         ring   12
day           day   11
dream       dream   11
let           let   11
every       every    9
able         able    8
one           one    8
together together    7

Step 5 : Generate the Word cloud

The importance of words can be illustrated as a word cloud as follows:
set.seed(1234)
wordcloud(words = d$word, freq = d$freq, min.freq = 1,
          max.words=200, random.order=FALSE, rot.per=0.35, 
          colors=brewer.pal(8, "Dark2"))
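
In this call, min.freq drops words that occur fewer than min.freq times, max.words caps the number of words drawn, random.order = FALSE places the most frequent words in the centre, and rot.per sets the proportion of words rotated 90 degrees.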

[Figure: word cloud of Martin Luther King’s “I Have a Dream” speech]
Explore frequent terms and their associations
You can have a look at the frequent terms in the term-document matrix as follows. In the example below we look for words that occur at least four times:
findFreqTerms(dtm, lowfreq = 4)
 [1] "able"     "day"      "dream"    "every"    "faith"    "free"     "freedom"  "let"      "mountain" "nation" 
[11] "one"      "ring"     "shall"    "together" "will"   
You can analyze the association between frequent terms (i.e., terms that correlate) using the findAssocs() function. The R code below identifies which words are associated with “freedom” in the “I Have a Dream” speech:
findAssocs(dtm, terms = "freedom", corlimit = 0.3)
$freedom
         let         ring  mississippi mountainside        stone        every     mountain        state
        0.89         0.86         0.34         0.34         0.34         0.32         0.32         0.32
Plot word frequencies
The frequencies of the 10 most frequent words are plotted:
barplot(d[1:10,]$freq, las = 2, names.arg = d[1:10,]$word,
        col ="lightblue", main ="Most frequent words",
        ylab = "Word frequencies")
[Figure: barplot of the most frequent words]


shiny1

library(shiny)
ui <- fluidPage(
   titlePanel("TABLE"),
    sidebarLayout(
      sidebarPanel(
        sliderInput("num", "integer", 1, 20, 1,
                    step = 1, animate =
                      animationOptions(interval=400, loop=TRUE))),
      mainPanel(
        # renderPrint output pairs with verbatimTextOutput in the UI
        verbatimTextOutput("prod")
      )))
 
server <- function(input, output) {
  output$prod <- renderPrint({
    x <- input$num
    for (i in 1:10) {
      a <- x * i
      cat(x, "x", i, "=", a, "\n")  # "\n" (not "<br>") breaks lines in verbatim output
    }
  })
}

shinyApp(ui = ui, server = server)

Object Detection from an Image (ImageAI)


!pip install tensorflow numpy scipy pillow matplotlib h5py keras
!pip install opencv-python
!pip install imageai  # the ImageAI package itself is also required




from imageai.Detection import ObjectDetection
import os

execution_path = os.getcwd()

detector = ObjectDetection()
detector.setModelTypeAsRetinaNet()
# the RetinaNet weights file must be downloaded separately (ImageAI releases page)
detector.setModelPath(os.path.join(execution_path, "resnet50_coco_best_v2.0.1.h5"))
detector.loadModel()
# keep only detections with confidence of at least 30%
detections = detector.detectObjectsFromImage(input_image=os.path.join(execution_path, "image.png"),
                                             output_image_path=os.path.join(execution_path, "image2new.jpg"),
                                             minimum_percentage_probability=30)

for eachObject in detections:
    print(eachObject["name"] , " : ", eachObject["percentage_probability"])
    print("--------------------------------")

OCV2

(Same face and eye detection script as in the 27 May post above.)

OCV1


from matplotlib import pyplot as plt
import cv2

img = cv2.imread(r'C:\Users\Manish\Desktop\VNR CDC\Day 6\OpenCV\NTR.jpg')
# OpenCV loads images as BGR; matplotlib expects RGB, so convert before plotting
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

plt.imshow(rgb)
plt.title('my picture')
plt.show()

_________________________________________________

import cv2

img = cv2.imread("lena.jpg",0)

ret, th = cv2.threshold(img, 125, 255, cv2.THRESH_BINARY)
ret, th1 = cv2.threshold(img, 125, 255, cv2.THRESH_BINARY_INV)

cv2.imshow("Hello World1",img)
cv2.imshow("Hello World2",th)
cv2.imshow("Hello World3",th1)

cv2.waitKey(0)

cv2.destroyAllWindows()
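
A global threshold like the one above applies a single cutoff to the whole image; when lighting varies across the frame, OpenCV's adaptive thresholding computes a local cutoff per neighbourhood instead. A minimal sketch (the block size 11 and constant 2 are arbitrary choices):

import cv2

img = cv2.imread("lena.jpg", 0)  # grayscale

# threshold each pixel against the mean of its 11x11 neighbourhood, minus 2
th_adapt = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 11, 2)

cv2.imshow("Adaptive threshold", th_adapt)
cv2.waitKey(0)
cv2.destroyAllWindows()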
_____________________________________________________



Friday 24 May 2019

MLP NN (Multi-Layer Perceptron with scikit-learn)

import pandas as pd

# Location of dataset
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"

# Assign column names to the dataset
names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'Class']

# Read dataset to pandas dataframe
irisdata = pd.read_csv(url, names=names) 
In [2]:
irisdata.head()  
Out[2]:
   sepal-length  sepal-width  petal-length  petal-width        Class
0           5.1          3.5           1.4          0.2  Iris-setosa
1           4.9          3.0           1.4          0.2  Iris-setosa
2           4.7          3.2           1.3          0.2  Iris-setosa
3           4.6          3.1           1.5          0.2  Iris-setosa
4           5.0          3.6           1.4          0.2  Iris-setosa
In [3]:
# Assign data from first four columns to X variable
X = irisdata.iloc[:, 0:4]

# Assign data from the fifth column to the y variable
y = irisdata.select_dtypes(include=[object])  
In [4]:
y.head() 
Out[4]:
         Class
0  Iris-setosa
1  Iris-setosa
2  Iris-setosa
3  Iris-setosa
4  Iris-setosa
In [5]:
y.Class.unique()  
Out[5]:
array(['Iris-setosa', 'Iris-versicolor', 'Iris-virginica'], dtype=object)
In [6]:
from sklearn import preprocessing  
le = preprocessing.LabelEncoder()

y = y.apply(le.fit_transform)  
In [7]:
y
Out[7]:
     Class
0        0
1        0
2        0
3        0
4        0
..     ...
145      2
146      2
147      2
148      2
149      2

[150 rows x 1 columns]
In [8]:
from sklearn.model_selection import train_test_split  
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.20)  
In [9]:
from sklearn.preprocessing import StandardScaler  
scaler = StandardScaler()  
scaler.fit(X_train)

X_train = scaler.transform(X_train)  
X_test = scaler.transform(X_test)  
In [10]:
from sklearn.neural_network import MLPClassifier  
mlp = MLPClassifier(hidden_layer_sizes=(10, 10, 10), max_iter=1000)  
mlp.fit(X_train, y_train.values.ravel())  
Out[10]:
MLPClassifier(activation='relu', alpha=0.0001, batch_size='auto', beta_1=0.9,
       beta_2=0.999, early_stopping=False, epsilon=1e-08,
       hidden_layer_sizes=(10, 10, 10), learning_rate='constant',
       learning_rate_init=0.001, max_iter=1000, momentum=0.9,
       nesterovs_momentum=True, power_t=0.5, random_state=None,
       shuffle=True, solver='adam', tol=0.0001, validation_fraction=0.1,
       verbose=False, warm_start=False)
In [11]:
predictions = mlp.predict(X_test)  
In [12]:
predictions
Out[12]:
array([0, 2, 1, 2, 0, 0, 2, 2, 2, 0, 1, 1, 0, 1, 1, 2, 0, 0, 2, 1, 1, 1,
       1, 0, 1, 1, 2, 2, 1, 0])
In [13]:
# Evaluating the algorithm
from sklearn.metrics import classification_report, confusion_matrix  
print(confusion_matrix(y_test,predictions))  
print(classification_report(y_test,predictions))
[[ 9  0  0]
 [ 0 12  1]
 [ 0  0  8]]
             precision    recall  f1-score   support

          0       1.00      1.00      1.00         9
          1       1.00      0.92      0.96        13
          2       0.89      1.00      0.94         8

avg / total       0.97      0.97      0.97        30
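
To classify a new flower measurement with the trained model, it must pass through the same StandardScaler fitted on X_train before prediction. A minimal sketch, continuing from the notebook above (the sample values are made up):

# sketch: classify one new measurement (values are hypothetical)
import numpy as np

sample = np.array([[5.1, 3.5, 1.4, 0.2]])  # sepal-length, sepal-width, petal-length, petal-width
sample_scaled = scaler.transform(sample)   # reuse the scaler fitted on X_train
print(mlp.predict(sample_scaled))          # 0, 1 or 2, per the LabelEncoder mapping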