Sunday 28 April 2019

shiny1

ui <- fluidPage(
  titlePanel("TABLE"),
  sidebarLayout(
    sidebarPanel(
      sliderInput("num", "integer", 1, 2000, 1,
                  step = 1, animate =
                    animationOptions(interval = 400, loop = TRUE))),
    mainPanel(
      verbatimTextOutput("prod")  # renderPrint() produces plain text, not a table
    )) )

server <- function(input, output) {
  output$prod <- renderPrint({
    x <- input$num
    for (i in 1:10) {
      a <- x * i
      cat(x, "x", i, "=", a, "\n")  # "\n", not "<br>": HTML is not rendered here
    }
  })
}

shinyApp(ui = ui, server = server)
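If you would rather render the result as an actual table (pairing tableOutput() with renderTable()), the multiplication table can be built as a data frame first. This is a sketch; mult_table() is a hypothetical helper name, not part of the app above.

```r
# Hypothetical helper: build the multiplication table as a data frame,
# suitable for renderTable()/tableOutput() instead of renderPrint()
mult_table <- function(x, n = 10) {
  data.frame(expression = paste(x, "x", 1:n, "="),
             result     = x * (1:n))
}

# In the server you could then write:
# output$prod <- renderTable({ mult_table(input$num) })
mult_table(7)
```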

NN


# creating the training data set
TKS <- c(20, 10, 30, 20, 80, 30)
CSS <- c(90, 20, 40, 50, 50, 80)
Placed <- c(1, 0, 0, 0, 1, 1)

# combine the feature columns into a single data frame
df <- data.frame(TKS, CSS, Placed)
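Neural networks often converge faster when the inputs are on a comparable scale, so an optional preprocessing step is min-max scaling. The sketch below is an assumption on my part, not part of the tutorial (which trains on the raw values); scale01() is a hypothetical helper.

```r
# Min-max scale a numeric vector to [0, 1] (hypothetical helper)
scale01 <- function(v) (v - min(v)) / (max(v) - min(v))

TKS <- c(20, 10, 30, 20, 80, 30)
CSS <- c(90, 20, 40, 50, 50, 80)
df_scaled <- data.frame(TKS = scale01(TKS), CSS = scale01(CSS))
df_scaled
```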

# load the library
library(neuralnet)

# fit a neural network with one hidden layer of 3 units
nn <- neuralnet(Placed ~ TKS + CSS, data = df, hidden = 3,
                act.fct = "logistic", linear.output = FALSE)

# plot neural network
plot(nn)

# creating the test set
TKS <- c(30, 40, 85)
CSS <- c(85, 50, 40)

test <- data.frame(TKS, CSS)

## Prediction using neural network

Predict=compute(nn,test)
Predict$net.result


# Converting probabilities into binary classes with a threshold of 0.5
prob <- Predict$net.result
pred <- ifelse(prob>0.5, 1, 0)

pred

LR

# Select some columns from mtcars.

input <- mtcars[,c("am","cyl","hp","wt")]
print(head(input))

am.data = glm(formula = am ~ cyl + hp + wt, data = input, family = binomial)

print(summary(am.data))
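Once fitted, the model can score new data with predict(type = "response"), which returns the probability of a manual transmission (am = 1). The sketch below refits the same formula directly on mtcars; the new car's values are made up for illustration.

```r
# Refit the logistic model on mtcars (same formula as above)
fit <- glm(am ~ cyl + hp + wt, data = mtcars, family = binomial)

# Probability of a manual transmission for a hypothetical car
new_car <- data.frame(cyl = 4, hp = 100, wt = 2.2)
prob <- predict(fit, newdata = new_car, type = "response")
ifelse(prob > 0.5, "manual (am = 1)", "automatic (am = 0)")
```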

Saturday 27 April 2019

COR



Text mining methods allow us to highlight the most frequently used keywords in a paragraph of text. One can create a word cloud, also referred to as a text cloud or tag cloud, which is a visual representation of text data.
Creating word clouds is very simple in R once you know the different steps to execute. The text mining package (tm) and the word cloud generator package (wordcloud) are available in R to help us analyze text and quickly visualize the keywords as a word cloud.

Step 1 : Install and load the required packages
Type the R code below, to install and load the required packages:
# Install
install.packages("tm")  # for text mining
install.packages("SnowballC") # for text stemming
install.packages("wordcloud") # word-cloud generator
install.packages("RColorBrewer") # color palettes
# Load
library("tm")
library("SnowballC")
library("wordcloud")
library("RColorBrewer")

Step 2 : Text mining

1. Read the text file from the internet
# Read the text file from internet
filePath <- "http://www.sthda.com/sthda/RDoc/example-files/martin-luther-king-i-have-a-dream-speech.txt"
text <- readLines(filePath)

2. Load the data as a corpus
# Load the data as a corpus
docs <- Corpus(VectorSource(text))

3. Inspect the content of the document
inspect(docs)


Step 3 : Text transformation

Transformations are performed with the tm_map() function, for example to replace special characters in the text.
Replacing “/”, “@” and “|” with a space:
toSpace <- content_transformer(function (x , pattern ) gsub(pattern, " ", x))
docs <- tm_map(docs, toSpace, "/")
docs <- tm_map(docs, toSpace, "@")
docs <- tm_map(docs, toSpace, "\\|")

Cleaning the text

The tm_map() function is used to remove unnecessary white space, convert the text to lower case, and remove common stopwords like “the” and “we”.
The information value of stopwords is near zero because they are so common in a language, so removing them is useful before further analysis. For stopwords, the supported languages are danish, dutch, english, finnish, french, german, hungarian, italian, norwegian, portuguese, russian, spanish and swedish. Language names are case sensitive.

# Convert the text to lower case
docs <- tm_map(docs, content_transformer(tolower))
# Remove numbers
docs <- tm_map(docs, removeNumbers)
# Remove english common stopwords
docs <- tm_map(docs, removeWords, stopwords("english"))
# Remove your own stop words
# specify your stopwords as a character vector
docs <- tm_map(docs, removeWords, c("blabla1", "blabla2"))
# Remove punctuations
docs <- tm_map(docs, removePunctuation)
# Eliminate extra white spaces
docs <- tm_map(docs, stripWhitespace)
# Text stemming (optional)
# docs <- tm_map(docs, stemDocument)

Step 4 : Build a term-document matrix

A term-document matrix is a table containing the frequency of the words: row names are words (terms) and column names are documents. The function TermDocumentMatrix() from the text mining package can be used as follows:
dtm <- TermDocumentMatrix(docs)
m <- as.matrix(dtm)
v <- sort(rowSums(m),decreasing=TRUE)
d <- data.frame(word = names(v),freq=v)
head(d, 10)

             word freq
will         will   17
freedom   freedom   13
ring         ring   12
day           day   11
dream       dream   11
let           let   11
every       every    9
able         able    8
one           one    8
together together    7
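The counting step itself does not require tm — for a toy text, the same word-frequency table can be sketched in base R with strsplit() and table(). The sentence below is a stand-in, not the speech file:

```r
# Toy word-frequency count in base R (no tm needed)
txt   <- "free at last free at last thank God almighty we are free at last"
words <- unlist(strsplit(tolower(txt), "[^a-z]+"))
freq  <- sort(table(words), decreasing = TRUE)
head(freq, 3)
```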

Step 5 : Generate the Word cloud

The importance of words can be illustrated as a word cloud as follows:
set.seed(1234)
wordcloud(words = d$word, freq = d$freq, min.freq = 1,
          max.words=200, random.order=FALSE, rot.per=0.35, 
          colors=brewer.pal(8, "Dark2"))

[Word cloud of Martin Luther King's "I Have a Dream" speech]
Explore frequent terms and their associations
You can have a look at the frequent terms in the term-document matrix as follows. In the example below, we want to find words that occur at least four times:
findFreqTerms(dtm, lowfreq = 4)
 [1] "able"     "day"      "dream"    "every"    "faith"    "free"     "freedom"  "let"      "mountain" "nation" 
[11] "one"      "ring"     "shall"    "together" "will"   
You can analyze the association between frequent terms (i.e., terms which correlate) using the findAssocs() function. The R code below identifies which words are associated with "freedom" in the "I Have a Dream" speech:
findAssocs(dtm, terms = "freedom", corlimit = 0.3)
$freedom
         let         ring  mississippi mountainside        stone        every     mountain        state
        0.89         0.86         0.34         0.34         0.34         0.32         0.32         0.32
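findAssocs() reports, for each term, the Pearson correlation between its per-document counts and those of every other term. The toy sketch below uses a made-up 4-document count matrix to show the underlying computation:

```r
# Made-up term counts across 4 documents
counts <- rbind(freedom = c(1, 0, 2, 1),
                ring    = c(1, 0, 2, 1),
                car     = c(0, 1, 0, 0))

# Terms with identical count patterns correlate perfectly
cor(counts["freedom", ], counts["ring", ])
```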
Plot word frequencies
The frequencies of the 10 most frequent words are plotted:
barplot(d[1:10,]$freq, las = 2, names.arg = d[1:10,]$word,
        col ="lightblue", main ="Most frequent words",
        ylab = "Word frequencies")


ML-RV

# select the response (mpg) and predictor columns from mtcars
input <- mtcars[,c("mpg","disp","hp","wt")]
print(head(input))

# fit the multiple regression model
model <- lm(mpg~disp+hp+wt, data = input)

# intercept
a <- coef(model)[1]
print(a)

# regression coefficients
Xdisp <- coef(model)[2]
Xhp <- coef(model)[3]
Xwt <- coef(model)[4]
print(Xdisp)
print(Xhp)
print(Xwt)


Create Equation for Regression Model

Based on the intercept and coefficient values above, we create the mathematical equation:
Y = a + Xdisp*x1 + Xhp*x2 + Xwt*x3
or
Y = 37.1055 + (-0.000937)*x1 + (-0.0311)*x2 + (-3.8008)*x3

Apply Equation for predicting New Values

We can use the regression equation created above to predict the mileage when a new set of values for
displacement, horsepower and weight is provided.

For a car with disp = 221, hp = 102 and wt = 2.91, the predicted mileage is:
Y = 37.1055 + (-0.000937)*221 + (-0.0311)*102 + (-3.8008)*2.91 ≈ 22.66
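Rather than plugging values into the equation by hand, the same prediction can be obtained directly with predict(); any small difference from the hand computation comes from rounding the coefficients. A sketch:

```r
# Fit the same model and predict the mileage for the new car in one step
model   <- lm(mpg ~ disp + hp + wt, data = mtcars)
new_car <- data.frame(disp = 221, hp = 102, wt = 2.91)
predict(model, newdata = new_car)
```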

Tuesday 23 April 2019

EVM

pragma solidity ^0.4.21;
contract Election{
    struct Candidate{
        string name;
        uint voteCount;
    }
    struct Voter{
        bool authorized;
        bool voted;
        uint vote;
    }
}

currency

pragma solidity ^0.4.0;
contract SimpleStorage{
    uint a;
    uint shop_1=0;
    uint shop_2=0;
    function initial( uint balu)public{
        a = balu;
    }
     function Shop1( uint balu)public{
        a = a-balu;
        shop_1=shop_1+balu;
    }
     function Shop2(uint balu)public{
          a = a-balu;
          shop_2=shop_2+balu;
    }
   
     function add(uint balu)public{
          a = a+balu;
    }
    function balance()public constant returns(uint){
        return a;
    }
     function shop_1balance()public constant returns(uint){
        return shop_1;
    }
     function shop_2balance()public constant returns(uint){
        return shop_2;
    }
   
}

Monday 15 April 2019

DHT11

// Robo India Tutorial
// Simple code to upload the temperature and humidity data using thingspeak.com
// Hardware: NodeMCU,DHT11

#include <DHT.h>  // Including library for dht

#include <ESP8266WiFi.h>

String apiKey = "10OEANZL1113P0WS9JF";     //  Enter your Write API key from ThingSpeak

const char *ssid =  "Connectify-me";     // replace with your wifi ssid and wpa2 key
const char *pass =  "123456";
const char* server = "api.thingspeak.com";

#define DHTPIN 0          //pin where the dht11 is connected//d3

DHT dht(DHTPIN, DHT11);

WiFiClient client;

void setup()
{
  Serial.begin(115200);
  delay(10);
  dht.begin();

  Serial.println("Connecting to ");
  Serial.println(ssid);

  WiFi.begin(ssid, pass);

  while (WiFi.status() != WL_CONNECTED)
  {
    delay(500);
    Serial.print(".");
  }
  Serial.println("");
  Serial.println("WiFi connected");
}

void loop()
{
  float h = dht.readHumidity();
  float t = dht.readTemperature();

  if (isnan(h) || isnan(t))
  {
    Serial.println("Failed to read from DHT sensor!");
    return;
  }

  if (client.connect(server, 80))   // "184.106.153.149" or api.thingspeak.com
  {
    String postStr = apiKey;
    postStr += "&field1=";
    postStr += String(t);
    postStr += "&field2=";
    postStr += String(h);
    postStr += "\r\n\r\n";

    client.print("POST /update HTTP/1.1\n");
    client.print("Host: api.thingspeak.com\n");
    client.print("Connection: close\n");
    client.print("X-THINGSPEAKAPIKEY: " + apiKey + "\n");
    client.print("Content-Type: application/x-www-form-urlencoded\n");
    client.print("Content-Length: ");
    client.print(postStr.length());
    client.print("\n\n");
    client.print(postStr);

    Serial.print("Temperature: ");
    Serial.print(t);
    Serial.print(" degrees Celsius, Humidity: ");
    Serial.print(h);
    Serial.println("%. Sent to ThingSpeak.");
  }
  client.stop();
  Serial.println("Waiting...");

  // ThingSpeak needs a minimum 15 second delay between updates
  delay(15000);
}