Posts

Showing posts from 2019

How to use global state or variable in Vue.js (Simplest Vue.js Tutorial)

We love Reactjs, but it is genuinely complex when you need to work with a global variable. So can Vue.js make this simpler than redux? Let's have a look together!

Before we start

First, I want you to have a basic understanding of Vue: you need to know how to use Vue CLI to create an empty project. Then you have to understand the basic structure of a Vue project, I mean template, script, and CSS. Finally, I need you to take a look at https://medium.com/js-dojo/vuex-2638ba4b1d76

Basic Concepts

State: It's the same as the Reactjs state. It's a global variable that holds the data affecting the view of your website.

Mutations: In this section, you define the scripts that change the state.

Actions: You should put every asynchronous operation here, for example a fetch API call.

Read More: Actions | Vuex: "Actions are similar to mutations, the differences being that: Instead of mutating the state, actions commit mutations."

Understand current web video tech in one article

https://medium.com/canal-tech/how-video-streaming-works-on-the-web-an-introduction-7919739f7e1

Read your user's camera with Reactjs and Typescript

Setup:

```
yarn create react-app yingshaoxo --typescript
```

Method 1:

```
import React from 'react';
import './App.css';

const App: React.FC = () => {
  return (
    <div className="App">
      <header className="App-header">
        <Mirror></Mirror>
        <p>
          yingshaoxo is your father.
        </p>
      </header>
    </div>
  );
}

interface Props { }
interface State { }

class Mirror extends React.Component<Props, State> {
  video_reference = React.createRef<HTMLVideoElement>()

  async componentDidMount() {
    if (this.video_reference.current) {
      let video_stream: MediaStream = await navigator.mediaDevices.getUserMedia({
        video: { facingMode: "user" },
        audio: false
      })
      this.video_reference.current.srcObject = video_stream
    }
  }

  render() {
    return (
      <video ref={this.video_reference} autoPlay={true} />
    )
  }
}

export default App;
```

How to download tensorflow in China (how to download big files in China)

You know, they have set up the GFW, so you probably can not install it directly from PyPI with `pip3 install tensorflow`. But you can use `wget -c **.whl` to download the package file, then install it with `pip3 install **.whl`. The `-c` flag in `wget` means "continue", i.e. automatically resume a partial download.
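What `wget -c` does can be sketched in plain Python: check how much of the file is already on disk, then ask the server for only the remaining bytes via an HTTP `Range` header. This is a minimal sketch, not a replacement for wget; it assumes the server supports Range requests, and the URL and filename are placeholders.

```python
import os
import urllib.request

def resume_download(url, filename):
    """Download `url` into `filename`, resuming any partial file,
    similar to what `wget -c` does."""
    # start from the size of any partially-downloaded file
    start = os.path.getsize(filename) if os.path.exists(filename) else 0
    request = urllib.request.Request(url)
    if start > 0:
        # ask the server for only the bytes we are still missing
        request.add_header("Range", "bytes=%d-" % start)
    with urllib.request.urlopen(request) as response, open(filename, "ab") as f:
        while True:
            chunk = response.read(64 * 1024)
            if not chunk:
                break
            f.write(chunk)
```

If the connection drops, simply calling `resume_download` again continues from where it stopped instead of starting over.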

One practical reason why your raspberry pi can't connect to your Wi-Fi automatically (and why your raspberry pi can't run when you use the GPIO power supply)

You should use a voltmeter to check whether your power supply outputs less than 5V; if it does, the raspberry pi can not run. ___ Of course, if you use a USB cable to power your raspberry pi, nothing will be wrong.

PX4 SITL MavSDK with jmavsim (Forward UDP for MavLink to work with Jmavsim and Raspberry-Pi)

1. Install MAVProxy: `sudo pip3 install MAVProxy`
2. Run jmavsim: `make px4_sitl_default jmavsim`
3. On the computer where you run jmavsim: `mavproxy.py --master=udp:127.0.0.1:14540 --out=udp:192.168.43.7:14540` (we assume the raspberry_pi IP address is `192.168.43.7`)
4. On the raspberry_pi, run your code with: `await drone.connect(system_address="udp://:14540")` (we assume that you have already compiled and installed MAVSDK-Python)

Bytes Conversion in Python3

```
import binascii

def bytes_to_hex(a_byte, length=2):
    return str(binascii.hexlify(a_byte))[2:-1]

def hex_to_bytes(hex_string):
    return binascii.unhexlify(hex_string)

def int_to_hex(integer, length=None):
    if length != None:
        hex_string = ('{:0'+str(length)+'X}').format(integer)
    else:
        hex_string = hex(integer)[2:]
        if (len(hex_string) % 2 == 1):
            hex_string = "0" + hex_string
    return hex_string

def hex_to_int(hex_string):
    return int(hex_string, 16)

def int_to_bytes(integer, length=None):
    hex_string = int_to_hex(integer, length)
    return hex_to_bytes(hex_string)

def bytes_to_int(a_byte):
    hex_string = bytes_to_hex(a_byte)
    return hex_to_int(hex_string)

def text_to_hex(text):
    bytes_ = binascii.hexlify(text.encode("ascii", "ignore"))
    result = str(bytes_)
    result = result[2:][:-1]
    return result
```
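These helpers compose into simple round-trips. Here is a small demo of the int/hex/bytes conversions (the relevant helpers are reimplemented inline so this snippet runs on its own; note that Python 3 also has `int.to_bytes` / `int.from_bytes` built in):

```python
import binascii

# Inline copies of the conversion helpers, for a self-contained demo
def int_to_hex(integer, length=None):
    if length is not None:
        # zero-padded, fixed-width, uppercase hex
        return ('{:0' + str(length) + 'X}').format(integer)
    hex_string = hex(integer)[2:]
    if len(hex_string) % 2 == 1:
        # unhexlify needs an even number of hex digits
        hex_string = "0" + hex_string
    return hex_string

def int_to_bytes(integer, length=None):
    return binascii.unhexlify(int_to_hex(integer, length))

def bytes_to_int(a_byte):
    return int(binascii.hexlify(a_byte), 16)

print(int_to_hex(255))            # "ff"
print(int_to_hex(255, 4))         # "00FF"
print(int_to_bytes(65))           # b'A'
print(bytes_to_int(b"\x01\x00"))  # 256
```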

Use Clang-format with vscode in Arch Linux

1. Install the clang-format executable: `sudo pacman -S clang`
2. Generate a configuration for clang-format: `/usr/bin/clang-format -style=llvm -dump-config > ~/.clang-format`
3. Make some changes in `~/.clang-format`
4. Install the vscode `clang-format` extension
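The changes in step 3 might look like this. This is only a hypothetical fragment of `~/.clang-format`: the keys (`BasedOnStyle`, `IndentWidth`, `ColumnLimit`, `UseTab`) are standard clang-format options, but the values are just example tweaks on top of the dumped LLVM defaults:

```
# ~/.clang-format (YAML)
BasedOnStyle: LLVM
IndentWidth: 4       # LLVM default is 2
ColumnLimit: 100     # LLVM default is 80
UseTab: Never
```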

Switch to Arch Linux for working (on ML)

Arch is good, I mean, you have full control over it. But for me, I don't have much time to dig deep into it, so I just installed manjaro instead (https://manjaro.org/download/gnome/). It turns out to be great to use. (The feature I like best: when you press Win+Arrow, it resizes the GUI window to the left or right half of your screen.) If you want to use it for machine learning, it's also quite simple: just run `sudo pacman -Syu python-tensorflow-cuda`. Then you'll have everything installed on your computer, including GPU support for tensorflow. ____ pacman = package manager. Use `pacman -S gvim` to install gvim. ____ If you have any trouble getting the Nvidia driver to work in your manjaro system, check this: https://forum.manjaro.org/t/howto-set-up-prime-with-nvidia-proprietary-driver/40225

Repeat boring work with Vim

1. `q` + (a-z): start recording a series of operations into that register
2. `q`: stop recording
3. `@` + (a-z): replay what you have recorded
4. `@@`: repeat the last replayed recording
____ Reference: vim record Author: yingshaoxo
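As a concrete (made-up) example, suppose you want to append a semicolon to the end of the current line and the next 9 lines. The keystrokes, following the steps above, would be:

```
qa        start recording into register a
A;<Esc>   append ';' at the end of the line, return to normal mode
j         move down one line
q         stop recording
9@a       replay the macro 9 more times
```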

Speech Recognition with Python3

### main.py
```
# https://www.codesofinterest.com/2017/03/python-speech-recognition-pocketsphinx.html
import speech_recognition as sr

# obtain audio from the microphone
r = sr.Recognizer()
with sr.Microphone() as source:
    print("Please wait. Calibrating microphone...")
    r.adjust_for_ambient_noise(source, duration=5)
    while 1:
        print("\nSay something!")
        audio = r.listen(source)
        # recognize speech using Sphinx
        try:
            #print("Google thinks you said '" + r.recognize_google(audio, language="zh-CN") + "'")
            print("Sphinx thinks you said '" + r.recognize_sphinx(audio, language="en-US") + "'")
        except sr.UnknownValueError:
            print("Sphinx could not understand audio")
        except sr.RequestError as e:
            print("Sphinx error; {0}".format(e))
```
____

Use Bazel and Gtest

Why Bazel? Because cmake or make is too hard to learn.

How to learn Bazel? Follow the instructions on its official website: https://docs.bazel.build/versions/master/tutorial/cpp.html

The binary BUILD file may look like this:

```
cc_binary(
    name = "hello-world",
    srcs = ["hello-world.cc"],
    visibility = ["//visibility:public"]
)
```

The library BUILD file may look like this:

```
cc_library(
    name = "hello-lib",
    srcs = ["hello-lib.cc"],
    hdrs = ["hello-lib.h"],
    visibility = ["//visibility:public"]
)
```

How to use Gtest with Bazel? See https://docs.bazel.build/versions/master/cpp-use-cases.html#including-external-libraries

The final folder structure may look like this:

```
├── external
│   └── gtest.BUILD
├── lib
│   ├── BUILD
│   ├── hello-lib.cc
│   └── hello-lib.h
├── main
│   ├── BUILD
│   └── hello-world.cc
├── test
│   ├── BUILD
```
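The test BUILD file is cut off above; a hypothetical sketch of what it typically contains is a `cc_test` rule. This assumes the workspace maps googletest in via `external/gtest.BUILD` and exposes a `:main` target, as in the Bazel guide linked above; the file and target names are illustrative:

```
cc_test(
    name = "hello-test",
    srcs = ["hello-test.cc"],
    copts = ["-Iexternal/gtest/include"],
    deps = [
        "@gtest//:main",    # from external/gtest.BUILD
        "//lib:hello-lib",  # the library under test
    ],
)
```

Run it with `bazel test //test:hello-test`.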

Use Nuitka to compile Python Codes to Binary file

```
pip install nuitka patchelf ordered-set zstandard

python3 -m nuitka --follow-imports --standalone --onefile --remove-output --output-filename="program.run" main.py
# or
python3 -m nuitka --follow-imports --standalone --output-dir="executable" main.py
```

Why I say Python is better than C/C++:
* Python is made for humans (writing code feels like speaking English)
* Python can run on every platform once you compile it to a binary with Cython or Nuitka
* Python has countless packages, which were also made for humans

How to compile Python code to binary?

1. Let's say you have a package with this structure:
```
└── auto_everything
    ├── __init__.py
    ├── base.py
    ├── video.py
    └── web.py
```
2. After Nuitka is installed, run the following commands:
```
cd ..
python3 -m nuitka --module auto_everything --output-dir=outputs
```
or
```
cd ..
python3 -m nuitka --module auto_everything --include-package=auto_everything.base,auto_everything.video,auto_everything
```

Mastering iptables, ip (iproute2)

After you have enabled both the hotspot and a VPN on your android phone, the following commands let every device connected to your hotspot use the VPN without any further setup.

```
iptables -t filter -F FORWARD
iptables -t nat -F POSTROUTING
iptables -t filter -A FORWARD -j ACCEPT
iptables -t nat -A POSTROUTING -j MASQUERADE

ip rule add from 192.168.43.0/24 lookup 61
ip rule add from 192.168.42.0/24 lookup 61
ip route add default dev tun0 scope link table 61
ip route add 192.168.43.0/24 dev wlan0 scope link table 61
ip route add 192.168.42.0/24 dev wlan0 scope link table 61
ip route add broadcast 255.255.255.255 dev wlan0 scope link table 61
ip route add 172.27.232.0/24 dev tun0 table 61
```
___
```
iptables -t filter -F FORWARD
```
* `-t` means `table`
* `-F` means `flush` (flush = clear = delete)
> Filter: filter is the default table. Its built-in chains are Input, Forward, Output.
This deletes all rules in the `Forward` chain.