Back in April, the CAVEDU education team profiled 劉俊民, a teacher who, because of his own upbringing, is especially devoted to helping children in rural areas learn robotics. In this article, we introduce a young-makers platform, "BARTER X BARTER", whose main purpose is to help children in areas short on educational resources: children can post a wish on the platform alongside a maker work of their own, such as a drawing, and a "dream angel" (that is, an adult with more means) can then make the wish come true.
First, here is a short video introducing the "BARTER X BARTER" platform:
Readers are probably wondering how the platform actually works. Its model is quite simple and is essentially bartering: a child registers an account on the platform and uploads a piece of their own work; for that work, the child makes a reasonable wish, that is, names an item they would like to trade it for; a dream angel (a capable adult) then helps make the wish come true. Of course, this is only the platform's basic concept and structure; BARTER X BARTER has designed a more detailed and thorough set of rules, which you can find here. The platform also works with the children to explore how a work might be turned into a product, raising its value so the children develop self-reliant thinking and skills from an early age.
We’re excited to introduce TensorFlow.js, an open-source library you can use to define, train, and run machine learning models entirely in the browser, using JavaScript and a high-level layers API. If you’re a JavaScript developer who’s new to ML, TensorFlow.js is a great way to begin learning. Or, if you’re an ML developer who’s new to JavaScript, read on to learn more about new opportunities for in-browser ML. In this post, we’ll give you a quick overview of TensorFlow.js, and getting started resources you can use to try it out.
Running machine learning programs entirely client-side in the browser unlocks new opportunities, like interactive ML! If you’re watching the livestream for the TensorFlow Developer Summit, during the TensorFlow.js talk you’ll find a demo where @dsmilkov and @nsthorat train a model to control a PAC-MAN game using computer vision and a webcam, entirely in the browser. You can try it out yourself, too, with the link below — and find the source in the examples folder.
ML running in the browser means that from a user’s perspective, there’s no need to install any libraries or drivers. Just open a webpage, and your program is ready to run. In addition, it’s ready to run with GPU acceleration. TensorFlow.js automatically supports WebGL, and will accelerate your code behind the scenes when a GPU is available. Users may also open your webpage from a mobile device, in which case your model can take advantage of sensor data, say from a gyroscope or accelerometer. Finally, all data stays on the client, making TensorFlow.js useful for low-latency inference, as well as for privacy preserving applications.
If you’re developing with TensorFlow.js, here are three workflows you can consider.
You can import an existing, pre-trained model for inference. If you have an existing TensorFlow or Keras model you’ve previously trained offline, you can convert it into TensorFlow.js format and load it into the browser for inference.
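For example, loading an already-converted model for in-browser inference takes a single call. This is a minimal sketch; the URL is hypothetical, and it assumes the model has already been exported to the TensorFlow.js web format:
// Load a converted model hosted at a (hypothetical) URL; tf.loadModel
// fetches the topology and weights and rebuilds the layers model.
const model = await tf.loadModel('https://example.com/model/model.json');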
You can re-train an imported model. As in the Pac-Man demo above, you can use transfer learning to augment an existing model trained offline with a small amount of data collected in the browser, a technique called Image Retraining. This is one way to train an accurate model quickly, using only a small amount of data.
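As a rough sketch of that pattern (the model URL, layer name, and tensor variables here are illustrative, not taken from the demo): cut an imported model at an internal layer, use it as a fixed feature extractor, and train a small head on activations gathered in the browser.
// Use an imported model as a fixed feature extractor.
const mobilenet = await tf.loadModel('https://example.com/mobilenet/model.json');
const layer = mobilenet.getLayer('conv_pw_13_relu');
const extractor = tf.model({inputs: mobilenet.inputs, outputs: layer.output});
// Train a small classifier head on activations collected in the browser,
// e.g. from webcam frames; activations and labels are assumed tensors.
const head = tf.sequential();
head.add(tf.layers.flatten({inputShape: layer.outputShape.slice(1)}));
head.add(tf.layers.dense({units: 3, activation: 'softmax'}));
head.compile({loss: 'categoricalCrossentropy', optimizer: 'adam'});
await head.fit(activations, labels, {epochs: 20});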
Author models directly in the browser. You can also use TensorFlow.js to define, train, and run models entirely in the browser using JavaScript and a high-level layers API. If you’re familiar with Keras, the high-level layers API should feel familiar.
If you like, you can head directly to the samples or tutorials to get started. These show how to export a model defined in Python for inference in the browser, as well as how to define and train models entirely in JavaScript. As a quick preview, here’s a snippet of code that defines a neural network to classify flowers, much like the getting started guide on TensorFlow.org. Here, we’ll define a model using a stack of layers.
import * as tf from '@tensorflow/tfjs';
const model = tf.sequential();
model.add(tf.layers.dense({inputShape: [4], units: 100}));
model.add(tf.layers.dense({units: 4, activation: 'softmax'}));
model.compile({loss: 'categoricalCrossentropy', optimizer: 'sgd'});
The layers API we’re using here supports all of the Keras layers found in the examples directory (including Dense, CNN, LSTM, and so on). We can then train our model using the same Keras-compatible API with a method call:
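// A minimal sketch of that call: xs and ys are assumed tf.Tensors
// holding the flower measurements and their one-hot class labels.
await model.fit(xs, ys, {epochs: 100});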
The model is now ready to use to make predictions:
// Get measurements for a new flower to generate a prediction
// The first argument is the data, and the second is the shape.
const inputData = tf.tensor2d([[4.8, 3.0, 1.4, 0.1]], [1, 4]);
// Get the highest confidence prediction from our model
const result = model.predict(inputData);
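// irisClasses (defined elsewhere) maps a class index to its name;
// argMax(-1) picks the index of the highest score along the class axis.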
const winner = irisClasses[result.argMax(-1).dataSync()[0]];
// Display the winner
console.log(winner);
TensorFlow.js also includes a low-level API (previously deeplearn.js) and support for Eager execution. You can learn more about these by watching the talk at the TensorFlow Developer Summit.
How does TensorFlow.js relate to deeplearn.js? Good question! TensorFlow.js, an ecosystem of JavaScript tools for machine learning, is the successor to deeplearn.js, which is now called TensorFlow.js Core. TensorFlow.js also includes a Layers API, a higher-level library for building machine learning models on top of Core, as well as tools for automatically porting TensorFlow SavedModels and Keras hdf5 models. For answers to more questions like this, check out the FAQ.
To learn more about TensorFlow.js, visit the project homepage, check out the tutorials, and try the examples. You can also watch the talk from the 2018 TensorFlow Developer Summit, and follow TensorFlow on Twitter.
Thanks for reading, and we’re excited to see what you’ll create with TensorFlow.js! If you like, you can follow @dsmilkov, @nsthorat, and @sqcai from the TensorFlow.js team on Twitter for updates.
Beyond the services and features it offers on its own, a "smart car" can extend into further application domains. For example, combined with Intelligent Transport Systems (ITS), it can improve nine major service areas: advanced traffic management, advanced traveler information, advanced public transportation, advanced vehicle control and safety, commercial vehicle operations, emergency incident support, electronic payment and toll collection, information management systems, and protection of vulnerable road users.
The third stop was the MIT Media Lab, probably the day's most memorable experience for every student. We had specially arranged for 謝宗翰, an outstanding student from Taiwan working in the Biomechatronics Leg Lab at the MIT Media Lab under Hugh Herr, principal investigator of this world-renowned lab for cross-disciplinary biomechatronic prosthetics research, to share his work on bionic prostheses, assistive devices, biomechanics, the musculoskeletal system, and neuroscience, and to lead the teachers and students on a tour of the lab. Even more moving, after the visit he shared the story of his educational journey and his outlook on life.
Remember when Google's DeepMind team beat the world Go champion with AlphaGo? (For a quick refresher, see the AlphaGo link.) That achievement was built on neural-network computation, and the tool was TensorFlow. The same goes for Gmail's spam detection, face recognition in Google Photos, and Google Translate. Google has released TensorFlow as open source, so anyone can collect sample data for the AI use case they have in mind and train a model to make the judgment.
This is Lisa, the STEAM coordinating teacher at Scarsdale High School, a public school. She manages the SHS Design Lab, teaches art, and has run project courses on wearable devices, "Take it apart" (toy deconstruction), and assistive devices for students with disabilities. Looking further into her school, we found that it ranks among the top 25 public high schools in the United States on various measures, and that it is, at heart, very similar to 板橋高中 (Banqiao Senior High School).
The Design Lab, open for two years now, is a teaching lab of considerable scale: a bright, open space where Lisa, a teacher with an art and design background, co-teaches project courses with Brian, a teacher specializing in mechatronics, in classes of about 16 students. The biggest teaching takeaway of the trip was Lisa's account of running the Take it apart project (works built by disassembling objects). The course is structured around the thinking routine introduced by Agency by Design:
Looking closely
Exploring complexity
Finding opportunity
As for assessment, the hardest part of project- and task-oriented course design is usually settling the standards and basis for grading, since it is difficult to judge every student by the final artifact alone. Lisa therefore grades with portfolio reflection notes and similar instruments, following the Agency by Design (maker-centered learning) criteria proposed by Harvard's Project Zero.
While scouting high schools to visit, IBM's P-TECH (Pathways in Technology Early College High School) model held my attention longest among the many educational innovations. The school's six-year secondary program was jointly developed in 2011 by IBM, the New York City Department of Education, and the City University of New York. And just this year, National Taipei University of Technology (台北科大), National Formosa University (虎尾科大), and National Kaohsiung University of Applied Sciences (高雄應用科大) in Taiwan formally adopted the model.
We are excited to introduce a new optimization toolkit in TensorFlow: a suite of techniques that developers, both novice and advanced, can use to optimize machine learning models for deployment and execution.
While we expect that these techniques will be useful for optimizing any TensorFlow model for deployment, they are particularly important for TensorFlow Lite developers who are serving models on devices with tight memory, power, and storage constraints. If you haven’t tried out TensorFlow Lite yet, you can find out more about it here.
The first technique we are adding support for is post-training quantization, available through the TensorFlow Lite conversion tool. This can result in up to 4x compression and up to 3x faster execution for relevant machine learning models.
By quantizing their models, developers will also gain the additional benefit of reduced power consumption. This can be useful for deployment in edge devices, beyond mobile phones.
Enabling post-training quantization
The post-training quantization technique is integrated into the TensorFlow Lite conversion tool. Getting started is easy: after building their TensorFlow model, developers can simply enable the ‘post_training_quantize’ flag in the TensorFlow Lite conversion tool. Assuming that the saved model is stored in saved_model_dir, the quantized tflite flatbuffer can be generated:
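The converter invocation itself is not reproduced here; as a sketch, assuming the tf.contrib.lite.TFLiteConverter API from TensorFlow 1.x of that period, it looks roughly like this:
import tensorflow as tf

# Build a converter from the SavedModel and enable the
# post-training quantization flag named above.
converter = tf.contrib.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.post_training_quantize = True

# Convert, then write the quantized flatbuffer to disk.
tflite_quantized_model = converter.convert()
open("quantized_model.tflite", "wb").write(tflite_quantized_model)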
Our tutorial walks you through how to do this in depth. In the future, we aim to incorporate this technique into general TensorFlow tooling as well, so that it can be used for deployment on platforms not currently supported by TensorFlow Lite.
Benefits of post-training quantization:
Up to 4x reduction in model size
Models, which consist primarily of convolutional layers, get 10–50% faster execution
RNN-based models get up to 3x speed-up
Due to reduced memory and computation requirements, we expect that most models will also have lower power consumption
See graphs below for model size reduction and execution time speed-ups for a few models (measurements done on an Android Pixel 2 phone using a single core).
These speed-ups and model size reductions occur with little impact to accuracy. In general, models that are already small for the task at hand (for example, mobilenet v1 for image classification) may experience more accuracy loss. For many of these models we provide pre-trained fully-quantized models.
Under the hood, we are running optimizations (otherwise referred to as quantization) by lowering the precision of the parameters (i.e. neural network weights) from their training-time 32-bit floating-point representations into much smaller and more efficient 8-bit integer ones. See the post-training quantization guide for more details.
These optimizations will make sure to pair the reduced-precision operation definitions in the resulting model with kernel implementations that use a mix of fixed- and floating-point math. This will execute the heaviest computations fast in lower precision, but the most sensitive ones with higher precision, thus typically resulting in little to no final accuracy losses for the task, yet a significant speed-up over pure floating-point execution. For operations where there isn’t a matching “hybrid” kernel, or where the Toolkit deems it necessary, it will reconvert the parameters to the higher floating point precision for execution. Please see the post-training quantization page for a list of supported hybrid operations.
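To make the weight-side arithmetic concrete, here is a schematic of affine 8-bit quantization. This is an illustration of the idea only, not TensorFlow Lite's exact scheme:
// Map an array of float weights onto int8 values with a scale and zero point.
function quantize(weights, numBits = 8) {
  const qmin = -(2 ** (numBits - 1));   // -128 for 8 bits
  const qmax = 2 ** (numBits - 1) - 1;  //  127 for 8 bits
  const lo = Math.min(...weights);
  const hi = Math.max(...weights);
  const scale = (hi - lo) / (qmax - qmin) || 1;  // guard constant weights
  const zeroPoint = Math.round(qmin - lo / scale);
  const q = weights.map(w =>
    Math.min(qmax, Math.max(qmin, Math.round(w / scale) + zeroPoint)));
  return {q, scale, zeroPoint};
}
// Dequantize: recover an approximation of the original floats.
const dequantize = ({q, scale, zeroPoint}) => q.map(v => scale * (v - zeroPoint));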
We will continue to improve post-training quantization as well as work on other techniques which make it easier to optimize models. These will be integrated into relevant TensorFlow workflows to make them easy to use.
Post-training quantization is the first offering under the umbrella of the optimization toolkit that we are developing. We look forward to getting developer feedback on it.
AI and related information technologies are a moving target: what takes heroic effort today may be a matter of clicking a few buttons tomorrow. The past two years of IoT teaching are a good example. With a LinkIt 7697 board and the MCS cloud service, even elementary school students can build simple IoT projects, monitoring sensor data and controlling the board from a web page or a phone. This is not to say that network protocols are unimportant; rather, for non-specialists this design helps them focus on what matters most: the data. If the data is meaningful or important to the developer, then starting from the data itself is an excellent starting point.
For most people, AI is still too abstract. Among AI's many fields, visual recognition is the easiest for students to relate to and to put into practice, so to teach it CAVEDU designed an "edge-computing AI driverless car" built on the Raspberry Pi 3 Model B+ (hereafter Pi3).
We consider it the best teaching kit for basic AI vision applications. We chose the Pi3 for its price-performance ratio and its wealth of teaching resources: back in the Pi 2 days it already delivered quite good OpenCV visual tracking, and its rich ecosystem of libraries lets many projects be built on it right away. Together with the Arduino, it has been hailed as the savior of student projects (laughs)!
The hard part of AI vision applications is collecting image data. 阿吉老師, who loves cats, jokes: "Taking as many photos of my own cat as you like is no problem, but collecting ten different kinds of cats, that's hard!" Our course walks students through the complete training workflow. Instead of using a ready-made dataset (the training results would not differ much), students collect images from the driverless car's actual course, label them, and then pick a model to train. Every one of these steps affects the car's final recognition results. Students definitely feel the difference!