Run the world from the palm of your hand by Jared Rhodes || Lightup Conference
8K views
Nov 16, 2023
In this session, we will look at how mobile devices can be used to interface with headless devices through wireless technology. With the strong adoption of Internet of Things devices, or devices without a direct interface, a common form of interaction is widely used. That interaction is through the computer most people carry with them every day, a mobile phone. After this presentation, attendees will have a strong understanding of how to use mobile devices in their Internet of Things story. Conference Website: https://www.2020twenty.net/lightup #lightup #2020twenty
Video Transcript
0:00
OK, so this is the run the world from the palm of your hand talk. If you were looking for one of the other talks, just stay here
0:09
Alright, so if you want to go ahead, you can scan the QR code and donate in support of UNICEF
0:18
I'll give you just a second if you didn't have your mobile phones ready. Normally with an audience, I can actually look out into the crowd and see how many phones are pointed at the screen
0:28
So I'll just talk right now to kill time in case you didn't get it. All right, and then we'd also like to thank our sponsors
0:35
They're helping make this possible. Okay, so about me, my name is Jared Rhodes
0:42
I'm an MVP for Microsoft Azure, a Pluralsight author. And if you need to contact me, you can use my email, my GitHub, my Twitter handle
0:51
And pretty much follows this theme if you need to find me anywhere on social media
0:55
I'm the owner and operator of commodity technologies. We do coaching. So if you need training
1:03
onsite training, learning, we do consulting as well. So if you need sort of high level architecture
1:10
planning or anything of that sort, we can do consulting. And then finally creating
1:18
we can actually do implementations and build software. And if you'd like to check out my latest course on Pluralsight, it's Creating
1:28
Responsive Layouts in Xamarin.Forms. I've got content on Pluralsight for Xamarin, Azure, and
1:35
machine learning. OK, so in this presentation, what we're going to do is first we're going to do a
1:44
brief overview of the, I don't know if we should call it the mobile landscape at this point, the
1:52
application development landscape, for anything that's not, I don't know, just
1:59
anything that's not server development. So we'll very quickly talk about Wi-Fi and LTE,
2:05
then we'll go over Bluetooth, and then we'll look at NFC, and then LiDAR, SMS, the camera, and the
2:12
microphone, all for these different devices. So let's get started with our overview
2:19
So we'll break it down into three categories for current development and again I'm trying not to
2:25
say mobile development, and it's the word that keeps popping into my head, but really, mobile is now
2:32
the phone, the laptop, and more. So we'll break it down into three categories: native,
2:39
cross platform and hybrid. So for native development you have, and in this case we'll talk about it
2:47
in the mobile context, you really have two choices. You're going to want to develop for iOS
2:53
so if you want to do that natively, you're going to be using Objective-Swift,
2:58
or excuse me, Objective-C and/or Swift. You'll be using Apple's Xcode
3:02
and you'll be using the iOS SDK. If you want to do native Android
3:07
You'll be using Java and or Kotlin. You'll be using Android Studio
3:11
and the Android developer tools, and you'll be using the Android SDK
3:17
If you want to do cross platform development, you have a few more options
3:21
So you can use React Native. So React Native allows you to use TypeScript
3:26
and JavaScript and you can use the tools of your choice to develop native applications
3:32
So React Native will compile down into a native application. It's not just a web view rendering an HTML page
3:38
This actually creates a native binary for your application. So you can use the tools of your choice for that
3:45
anything that supports TypeScript and JavaScript. You can do Xamarin or Xamarin Forms
3:51
which allows you to use C# and F#. And you can create applications in Visual Studio,
3:58
Visual Studio Code, and Rider. If you are more familiar with C and C++, you can do a C++ application using C++ or any other
4:11
language that will allow you to create a linkable library, and you can use the tools of your choice
4:18
iOS supports C++ through Clang. Android has the NDK, so you can use C++, and you can actually
4:27
build a complete application without ever having to write a line of Java or a line of
4:33
Objective-C or Swift. Well, most likely you'll have to write a
4:39
little bit. But you can, using the NDK or Clang, create an entire application with just C++
4:46
And then finally, when I first gave this talk, I actually did it and I did those three
4:51
And then someone stopped me after and they said, well, you forgot, you know, you can create cross-platform applications in all the different game engines, and they were right, I completely
4:59
missed it. But you can create applications for Android, iOS, Windows, and Linux using Unity, Unreal, and
5:08
other game engines that give you integration to create a UI in that way
5:16
and then we have hybrid applications, which I've seen less and less and less and less. When I say hybrid applications,
5:22
that means some form of an application that is not running true native
5:28
It doesn't compile down to the native components. It actually runs partially or fully in a web view
5:34
control running an HTML page. So there's Cordova. So Cordova is what was that called? PhoneGap
5:43
AirGap? PhoneGap? So Cordova is the PhoneGap open source? I forget exactly what Cordova's relationship is to PhoneGap
5:52
I know it is PhoneGap. I'm just forgetting exactly if it's the open source version or not
5:59
But you can use that and you can use JavaScript and HTML to write a complete application
6:03
Just understand that it's hosted in a web view on the device
6:08
And there are plugins that you can use to reach outside of that web view and interact with the native components
6:14
And then there's Ionic, and you can use Ionic in sort of the same way, giving you that HTML/JavaScript build experience
6:20
to run on the device. Okay, so now we've had a high-level overview of mobile development,
6:29
let's move into talking about Wi-Fi and LTE very quickly, because one of the things I want to do
6:34
during this talk is to give you an idea of what technology to use when, and then what's available
6:38
to you through these different devices, and obviously Wi-Fi and LTE are available on almost
6:43
all these devices. So when do we use Wi-Fi and LTE? Well we use it almost all the time without
6:51
even thinking about it. So we use it because we want the high data rate. If we want to do
6:58
video streaming or we want to do large content, large amounts of data, what am I trying to say,
7:06
high data rate. If you have to use a lot of bits and a lot of bytes, you're going to have to use Wi-Fi or LTE. These other protocols aren't going to support it as well
7:13
But the main reason you're going to use Wi-Fi and LTE is because it's common
7:18
I'm sure everyone listening on this call is most likely either on the Wi-Fi where they
7:23
are, or they're using LTE on their mobile device. Most places probably aren't even hardwired anymore, at least for whoever's listening. And then finally, obviously, you're going to want to use it to connect to the Internet
7:35
If I go ahead and I'm using these other things that were listed that I showed earlier, whether I'm using SMS or Bluetooth
7:42
the Internet isn't an easy access point from that wireless protocol
7:51
So if I want to access servers or just any sort of backend I'm developing over the internet, I'm going to default to Wi-Fi and LTE
7:59
And that's really what I want to say about Wi-Fi and LTE because it's so ubiquitous in our development
8:03
I don't think I can teach you anything about Wi-Fi or LTE. What I'd rather do is I'd rather move on to talking about the other wireless protocols that you have available to you on these devices that you may not regularly use
8:14
So one of those would be Bluetooth. Bluetooth is pretty universal on devices nowadays
8:22
In fact, a lot of the chip manufacturers that make the Wi-Fi chips just have Bluetooth capabilities built in
8:28
So some of the uses for Bluetooth, you already know. The original Bluetooth component was the earpiece
8:38
to the point that we all know the person walking by at the airport or walking around outside talking to themselves
8:44
because they had the Bluetooth earpiece in. It has other uses as well
8:48
You can obviously use it with the same sort of technology to listen to music
8:51
but it can also be used to transfer files. I believe with the iPhone and a Mac or an iPhone and any other Apple product
9:00
behind the scenes, it defaults to connecting over Bluetooth when you want to do things like local file sharing
9:05
So you can easily transfer those files over. And then we have tethering or just because I'm from Georgia, I'll call it piggybacking
9:14
So tethering or piggybacking so that I can connect my phone to another device over Bluetooth and then let that device, which is Internet capable
9:25
Be my Internet connection or vice versa. A device can connect to my device and now my device, which has LTE or Wi-Fi, can act as the Internet for that device
9:36
And you just do a protocol translation of Bluetooth to Wi-Fi or LTE
9:42
and that way each device doesn't have to have its own connection they can
9:46
share a connection. We'll go into this a little bit more but you can use it in
9:52
advertising there's actually a component of the Bluetooth spec that is built
9:56
specifically for advertising and you've probably had it on your phone and you
10:00
just haven't noticed but advertising actually built into the spec is a major
10:04
component of Bluetooth and finally you can use it for location services
10:08
So along with the advertising spec, Bluetooth has built-in capabilities so that you can basically broadcast
10:20
So let's say you're in a store. Normally I do this with the room that I'm in
10:25
but let's say that you are in a big room like a theater where someone's presenting
10:32
So you're in a theater where someone's presenting, and in the room that you're in
10:38
there's a device that's broadcasting a Bluetooth signal, and then in the room next door there's a
10:43
device also broadcasting a Bluetooth signal. What you can do is you can actually take the signal
10:47
strength of the different access points for Bluetooth sending these advertising packets,
10:53
and you can determine how far you are away from each one, and that'll actually give you an indoor
10:58
location. You use this commonly when you need indoor location on top of GPS. So I use GPS for
11:05
your outdoor location, but once you get inside, GPS doesn't work as well, so I can actually use
11:09
Bluetooth and Bluetooth beacons to track your indoor movements. And I say track, it sounds really
11:14
bad, but you'd be surprised. If you ever turn on your phone and look at your Bluetooth utilization
11:19
for, like, the Walgreens app, you'll probably see that the Bluetooth is on in the background, and
11:24
it actually is using beacons to help collect data while you're in the store
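As a rough sketch of the distance math behind that beacon ranging trick, here is the usual log-distance path-loss estimate in C#, since the later examples in the talk are Xamarin. The txPower and path-loss exponent below are placeholder calibration values you would tune per beacon and per building; they are not numbers from the talk.

```csharp
using System;

public static class BeaconRanging
{
    // rssi: measured signal strength in dBm for one advertising packet.
    // txPower: the RSSI you expect at 1 metre (assumed -59 here).
    // n: path-loss exponent, roughly 2 in open space, higher indoors (assumed 2.5 here).
    public static double EstimateDistanceMetres(int rssi, int txPower = -59, double n = 2.5)
    {
        return Math.Pow(10.0, (txPower - rssi) / (10.0 * n));
    }
}
```

With one reading from the beacon in your room and one from the room next door, whichever estimate is smaller tells you which broadcaster you are closer to.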
11:29
Alright, so Bluetooth can be broken down into two major specs. There is the Bluetooth basic rate and enhanced data rate
11:42
This is what I'll call original Bluetooth. And if you want to program for Bluetooth or develop for Bluetooth basic rate or enhanced data rate
11:50
you can think of it as developing for a client to server application
11:56
where you're just working on open sockets, meaning that there is a component where I can send bytes
12:04
and I can receive bytes, and I can do this as a stream. We can have an open conversation
12:08
where you get bytes, I get bytes back, and everything built on top of that is going to be
12:12
your application determining exactly what each byte means and how to have a conversation with the other side
12:21
And you use basic rate or enhanced data rate when you want to do things like the music streaming, like the headphones, because you want that open streaming communication
12:31
As of Bluetooth 4.0, they came out with Bluetooth Low Energy, and Bluetooth Low Energy had two major
12:37
components. One was the low energy part, well, it's called low energy for a reason. The low energy
12:43
portion means that the device can go to sleep. So when you create the Bluetooth connection to a
12:49
device they actually talk to one another and they determine how often are we actually going to send
12:54
information and that determines how often the device will go to sleep and by going to sleep
12:59
if you think about how the chips in your computer actually work,
13:03
once you power them on, they're always going, they're always moving, but they also have this ability,
13:08
think about when you put your computer into standby mode, it goes into a low power state, and
13:14
Bluetooth Low Energy is a spec built for that, so that we start a conversation, we determine how often
13:20
we're going to talk, and once we determine that, you can now go to sleep when
13:25
we're not going to talk, and you can enter that low power mode. To give you an idea, if you go into
13:30
low power mode, let's say, half the time, so we decide we're going to talk and you're going to go to sleep
13:36
one second out of every two seconds, then we can extend the life of the battery two times. Well, if
13:42
we do that one second every 10 seconds is when we're going to talk, well, then we can extend the
13:46
battery life nine to ten times. In Bluetooth Low Energy, we have three different topologies
13:56
So one is point to point. So point to point is, oh, you know what
14:00
I skipped it. Let me step back actually. I forgot one thing. I told you about Bluetooth Low Energy being Low Energy
14:04
I forgot the second really important part, which is now there is contract-based communication
14:12
So instead of just having raw bits and bytes that are being handed back and forth and you're determining everything
14:16
Bluetooth Low Energy actually has the idea of a contract. And so you can create a contract
14:21
and we'll go into that a little bit more later so that now instead of building like you're creating on top of a raw socket
14:27
you can think of it more as building an application like you're building an HTTP API
14:32
so that you can send this command response style of communication. Okay, so now moving into the topologies
14:40
We have three topologies for Bluetooth. So one is point to point
14:43
For point to point, you can guess, right? We have a client, we have a server, we connect
14:49
And that is point to point. Now we can communicate directly. Broadcast
14:54
So broadcast is what I was talking about earlier with the Bluetooth beacons, or the Bluetooth advertising. So now you can create a Bluetooth device, put it in somewhere,
15:05
and all it does is broadcast a set, a set of bytes, I always call it bytes, a set of packets that
15:13
now anyone roaming and listening for Bluetooth broadcast can hear. So it has a very specific format of this is me telling you
15:22
who I am, this is me telling you what company I am, and then here's the actual broadcast that I'm
15:28
trying to send you. And then as the receiver, you can retrieve that broadcast and get the information
15:34
out of it. Because once we've established that first part of me telling you who I am, what company
15:39
I am, the rest of that packet can change. And this is used all the time for various different things
15:44
We already talked about the Bluetooth beacons for indoor GPS. But when this was first used, it was
15:50
I shouldn't say when it was first used, but one of the original uses I think was the
15:55
World Cup in Brazil. And Coca-Cola was using it so that when you get close to the Coca-Cola
16:01
booth, it would just be sending out a broadcast and a little notification would pop up on your phone
16:06
saying, come get a free Coke over here at the Coca-Cola booth. And companies can use this for
16:13
all kinds of different things to broadcast. We can send information to you that we can use within
16:18
our application that you don't need directly at all times, but if there's something important, I
16:23
can show you a notification, and I can do this advertising through the broadcast spec, or the
16:28
broadcast topology. And then the lesser used mesh topology. Mesh topology is, if we go back to the
16:35
scenario where you're in a room and there's a Bluetooth device that you can connect to, and then
16:40
in the next room there's a Bluetooth device, let's keep that going, let's say there's another device
16:44
and another device, and at the end of all those devices that are way away from you,
16:50
The last device can connect to the Internet. What the mesh spec will allow for is if I can connect to the one that's local to me and
16:57
the mesh spec has all the different ones routed out to the Internet interconnected in the mesh
17:02
topology, it will automatically route to the one that I'm asking for to get to the Internet
17:11
So I connect to the local one. I'm making a request to the one that's too far
17:16
for me to connect to directly. The mesh spec will actually just figure out, okay, who's all in my network,
17:20
and then it'll route you through all the different ones it needs to, to finally get you to where your request was sent
17:26
and send you a response. So I'm going to start showing you some code
17:32
This is Xamarin code, so it's C sharp. So if you're a C sharp developer, it'll look familiar
17:38
For each of these that I'm showing you, this is actually just Xamarin's way of surfacing
17:46
the native Java or Swift APIs in C#. So it's C# lite
17:55
These will look different if you're actually programming them in the different languages I talked about
17:59
But these are the C Sharp Xamarin code snippets. So let's look at connecting to a Bluetooth device
18:05
It's actually as easy as this. Say we have a device list, and every time we connect to a device we just want to add that device.
18:12
Well, once we have our Bluetooth adapter, which is a global variable, we just do adapter,
18:19
scan for devices, and when it discovers one we can add it to a list of devices
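The slide code isn't captured in the transcript, so here is a hedged reconstruction of that scan-and-add pattern, assuming the Plugin.BLE NuGet package (its adapter, DeviceDiscovered event, and scan call line up with the description above):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Plugin.BLE;
using Plugin.BLE.Abstractions.Contracts;

public class BleScanner
{
    // The "global" Bluetooth adapter the talk mentions.
    readonly IAdapter adapter = CrossBluetoothLE.Current.Adapter;

    public List<IDevice> DeviceList { get; } = new List<IDevice>();

    public Task ScanAsync()
    {
        // Every time a device is discovered, add it to the device list.
        adapter.DeviceDiscovered += (sender, args) => DeviceList.Add(args.Device);
        return adapter.StartScanningForDevicesAsync();
    }
}
```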
18:24
Okay, so we talked about the Bluetooth Low Energy spec, and I want to go into that a little bit more,
18:30
because that actually allows you to easily connect and easily create devices for your
18:36
applications, or clients for the devices for your applications. So there's the
18:42
GATT server. So the Generic Attribute Profile, the GATT, is built on top of the Attribute Protocol, ATT,
18:50
and establishes common operations and a framework for the data transported and stored by the Attribute
18:55
Protocol. Okay, that doesn't make any sense, let me break it down for you so that it makes some sense
19:00
So the GATT server can be broken up into four major categories, and they're hierarchical,
19:07
meaning at the top level, there's a profile and a profile is pretty straightforward
19:14
If you were to, let's take OpenAPI as an example to relate to. So if you want to develop for an API
19:22
and it has an OpenAPI spec, you can go, you can grab that OpenAPI spec, and now you know
19:27
the contract that's set up for you to use. It tells you all the different methods and whatever
19:32
you can use for this API. The profile is the same way, except the profile is at the device level.
19:41
So I'm connecting to this device, and this device profile tells me all the different things it can
19:49
do, except it's broken down a little bit differently. It actually tells me all of its services. So I go,
19:57
I ask the device. I say, hey device, you support Bluetooth low energy because we connected. It says
20:02
yes, I do. Here's my profile. I go, OK, that's a cool looking profile. What services are in this
20:07
profile? Well, within the services, it tells me a set of characteristics. In the OpenAPI world,
20:16
we can think of it as the service is actually what relates to the OpenAPI spec. The profile
20:21
would be more like looking at a set of APIs that you need to integrate with. The service is the
20:28
individual API, and the characteristic would be the REST component of it. So let's say that there was
20:36
a person and I wanted to be able to get and put and post or create, delete that person
20:44
Well, characteristic would be equivalent to that. The characteristic is that entity that I want to
20:49
operate on. And I'm sort of, if there are any people here who do Bluetooth, I'm gearing this
20:56
more towards developers. I know that the characteristics aren't entities, but you get
21:01
the idea. As a developer, you would think of it more as, here's the entity that I want to operate on,
21:05
and that's the characteristic. Now, within that characteristic, I can look at different descriptors,
21:11
properties, and the value of that characteristic. Okay, that probably still doesn't make much sense, so let's
21:17
let's break down a little bit further. So we have the profile. That's what the device just gave me
21:21
and it said this is all the things that I can do and all things are a list of services
21:29
Within that list of services, each of those services could have different characteristics
21:34
In each of those characteristics, I can then now ask for the different properties
21:39
the values, and the descriptors of that characteristic. So when I'm actually programming,
21:49
I probably already know the services that are going to be available to
21:54
me when I connect to a profile. Because think about it, when you
21:58
program against an API with an OpenAPI spec, you don't query the API and dynamically
22:03
figure out what's going to be there. No, probably before that you
22:07
generated a whole bunch of code. You decided which endpoints you were
22:11
going to interact with for your application. And you didn't really care after that
22:15
when you connected. What you really did was you said, OK, well, there's a characteristic of a
22:20
person, so let me connect, and I want to talk to that characteristic. So I want to create a new person, so I send a message to that characteristic, and it creates a response and sends it back
22:37
So let's take a look at a predefined service. So the Bluetooth spec,
22:41
the Bluetooth Low Energy spec, has pre-built services that you can look at the definition of.
22:47
And that way, if you actually want to create an application that can connect
22:51
to different Bluetooth devices that already have predefined services, they're common. You know what they would look like
22:58
So the heart rate service is known. So it actually has a unique identifier
23:02
When you pull a service in, you can check against that unique identifier. And if it is the heart rate service
23:07
you can already have a predefined way to interact with it. Well, let's take a look at the heart rate service
23:13
It's gonna have two characteristics and a descriptor in the service. And the characteristics, if I'm remembering correctly
23:21
are the beats per minute. And then there's another characteristic that you use to
23:27
what was the other characteristic? I'm forgetting off the top of my head, but let's just worry about that beats per minute characteristic
23:34
So that beats-per-minute characteristic, oh, I think the other one is, I think the other one is so that I can set alerts
23:41
So I've got beats per minute and that is a read field
23:46
I can read beats per minute. I can't write to beats per minute
23:50
I can only read, right? Because it's a sensor reading values, it would make no sense to write to it. But then there's another characteristic, I believe,
23:56
that I should be able to write to, so that I can set alerts. I can say, if beats per minute
24:02
goes above X or Y, now I want to receive alerts. Normally I would ask, are there any questions,
24:11
but if there are, I don't see any, so we're moving on. This is some code for, once we've got
24:18
that device, you remember we made that device list earlier, now let's interact with the devices
24:23
that we've talked about. So if I want to get the list of services from the profile, after I've
24:27
connected to the device, I just say GetServicesAsync. And if I want to get the characteristics
24:33
from that service, I just call GetCharacteristicsAsync. And if I want to read a characteristic,
24:38
it'll send me the byte array, and I just say read that characteristic. If I want to write to
24:43
the characteristic, I just write, and I send the bytes. If I want to receive alerts for something
24:49
like the heart rate monitor, I can actually subscribe to the update events by just giving the
24:57
events a callback and then calling StartUpdatesAsync. If I want to get the descriptors, you can
25:02
guess. I just call GetDescriptorsAsync. If I want to read that descriptor, I can just read it
25:08
If I want to write to it, I can just write to it. Seems pretty simple, right
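Again as a hedged sketch, assuming the same Plugin.BLE-style API as the scan example, this is roughly what walking the profile, services, characteristics, and descriptors looks like; the console output and the read/write/update branching are illustrative, not the exact slide code:

```csharp
using System;
using System.Threading.Tasks;
using Plugin.BLE.Abstractions.Contracts;

public static class GattExplorer
{
    public static async Task ExploreAsync(IDevice device)
    {
        // Profile -> services
        var services = await device.GetServicesAsync();
        foreach (var service in services)
        {
            // Service -> characteristics
            var characteristics = await service.GetCharacteristicsAsync();
            foreach (var characteristic in characteristics)
            {
                if (characteristic.CanRead)
                {
                    // Reads hand back a raw byte array; your app decides what it means.
                    var bytes = await characteristic.ReadAsync();
                }

                if (characteristic.CanWrite)
                {
                    // Writes are just bytes going the other way, e.g. setting an alert threshold.
                    await characteristic.WriteAsync(new byte[] { 0x01 });
                }

                if (characteristic.CanUpdate)
                {
                    // Subscribe to notifications, e.g. beats-per-minute updates.
                    characteristic.ValueUpdated += (s, e) =>
                        Console.WriteLine(BitConverter.ToString(e.Characteristic.Value));
                    await characteristic.StartUpdatesAsync();
                }

                // Characteristic -> descriptors
                var descriptors = await characteristic.GetDescriptorsAsync();
            }
        }
    }
}
```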
25:13
We can talk a little bit about the physical layer because Bluetooth 5 added stuff to the physical layer that's kind of important
25:21
not truly important to most people in this talk, but it will cover it anyways just in case it comes up
25:27
So if we look at the Bluetooth physical layer, you'll see how apps
25:32
you know, we've got apps up here at the top and then it's highlighted and it says apps and then below that's host
25:37
And then the first thing in host is the generic access profile
25:41
As an application developer, you're done. You're probably not going to go anywhere below the generic access profile
25:49
The physical layer, the link layer, the direct test mode, host controller
25:53
logical link, attribute protocol, security manager. If you're building hardware, you're still most likely going to have all that provided to you by
26:02
the hardware manufacturer of the Bluetooth chip itself. However, if you are creating Bluetooth chips
26:09
then the rest of those layers are important to you. Changes in Bluetooth 5 to the physical layer
26:15
So they added three new physical layers. So what we've done is we've renamed Bluetooth Low Energy
26:22
The original spec, the original physical layer, is now called LE 1M.
26:27
So all Bluetooth 5 devices have to have LE 1M, it's required for all devices,
26:34
Meaning that all Bluetooth 5 devices are Bluetooth 4 compatible by default
26:40
On top of LE and LE1M, we now have LE2M. So now when the devices start a conversation
26:48
they can actually swap out of their original mode of 1M and they can swap to a new physical layer, 2M
26:55
So now they can actually have a higher data rate between the two of them
26:59
There's also LE coded, so we can do that for longer range. If we start a conversation, we say
27:03
hey, I'm gonna walk away. We can change physical modes and now we can have a long range conversation
27:10
So you have an idea of the changes: there are two LE Coded options for distance,
27:15
so you can get two times as far or four times as far, with your data rate cut about in half each time
27:21
Now you're still looking at 125 kilobits per second, which is better than Comcast gives me sometimes
27:26
So it's still not a bad data rate and you get a decently large range out of your device
27:33
And then for LE2M, you get double the speed, but a little bit less range
27:39
and they're all optional, and there's different error correction. Again, not that important to application developers
27:46
but in case you run into a Bluetooth project and all of a sudden you need to learn about Bluetooth 5
27:52
these are the primary differences. Okay, so we talked at length about Bluetooth
27:58
because, as a spec, it has a lot of devices supporting it now
28:03
Your car supports it, a lot of devices in your house. Let's talk about some of the lesser used
28:08
wireless capabilities available to you on your device that you can still utilize
28:14
So let's talk this time about near field communication. So near field communication is a form
28:18
of contactless communication between devices like smartphones or tablets. So that contactless communication allows you
28:25
to wave your smartphone over the NFC-compatible device, and you can send information without needing to go
28:31
through multiple steps or setting up a connection. Okay, so for near field communication
28:36
talk about the uses, and then I want to talk about the NFC standard, and this will go a lot faster
28:41
than Bluetooth, so if you're in for a long break, Bluetooth was the one we covered the most. Okay,
28:45
so NFC is useful for, let's say you want to connect two devices, right? NFC allows you to get two
28:50
devices close together, and now we can swap over to a different protocol that gives us longer range,
28:55
and you see devices doing this all the time with sort of a tap to connect, and then they use a
29:00
different communication protocol after that. And you can use it for different things too. If you
29:06
want to, let's say you're installing water meters or an appliance, you can just go ahead, install it,
29:11
and then have the person who's using your device just open the application, tap to whatever that
29:18
installation was, and if that device is NFC and you've programmed it correctly,
29:23
you can use the phone's Internet connection to now register that in your service. So
29:28
we do this in the field. Let's take the example of a water meter. What I want to be able to do is, once
29:33
the installer is done installing the water meter, they now open the application, they tap it, I grab
29:39
the user ID that they're logged in with, and I grab the geolocation and the timestamp, and we log
29:45
all of that, and then we start provisioning the device and our back-end services to connect
29:49
over the NFC, so that way when the installer's done, all they do is tap the phone, and that allows the
29:54
initial communication to start all the different provisioning in the background. And you can do it, if you want to,
30:01
like you would Bluetooth, right? If you want to, there's a good one here
30:06
disabling your residential alarm. I used to have something set up on my computer
30:10
my tower, the one sitting right next to me. I could just bring my phone over the NFC chip
30:14
and it would turn on the computer without me having to press the button
30:18
It was a neat project, but I realized I could just press the button after a while
30:23
OK, so this NFC standard we can break down into two major components, tags and modes
30:30
So for tags, there are four different tag types and we're going to see if I can remember these off the top of my head
30:36
because I didn't put them in my notes. I'm used to doing this in person. So the first two are type 1 and type 2. Small and extra
30:44
small. I believe the difference in small and extra small is small is 128 bytes and extra small is 64 bytes
30:55
That's the total amount of information you can pack onto the tag. I can put a GUID on an extra small and it's fine
31:02
So if I'm doing device provisioning, I can get a GUID, bring it to the card
31:07
put that right to the card, put that card into the device
31:12
and now whenever I scan, the unique identifier for that device is readily available
31:16
For small and extra small in some cases, I can actually fit a whole URL in there
31:21
so that if I need even more information, I can keep that information dynamic
31:26
but I can use the URL that's unique for that tag to go get it
31:33
And the reason there's a type one small and a type two extra small, and there's such a little difference, it's so small in bytes,
31:39
just has to do with price. If you want to manufacture extra small type twos,
31:44
I mean, if you save a dollar going to extra small from small, usually when you're provisioning NFC
31:52
tags, you're typically into the millions. So if we can save a dollar just going extra small,
31:57
we saved a million dollars. There is type three, which is FeliCa, and if memory serves, FeliCa just has to do with the Japanese card reader technology. So in Japan they have a specific spec for card reader technology, and FeliCa is the tag type that is compliant with Japanese card readers. So it just exists for compliance reasons
32:20
And then finally type four is special. Type four is locked and that is actually meant for rewrite for an actual communication and we'll get into that in just a second
32:29
That takes us into the two different types of modes. Now the mode we've talked about up to this point has been reader, writer and card mode. So reader, writer and card mode is exactly what you would think. You can see here we've got the tap to pay graphic, which is reader and writer. One is the reader, one is the writer, and they communicate in one burst of information
32:49
But if you want to get really fancy, and you can, you can actually go into initiator and target mode
32:55
And what initiator and target mode does is, let's say we start in this reader, writer, and card mode
32:59
where one reads and the other one writes. Well, what if we just flipped it after we had our first read and write
33:07
And now we're communicating the other way. So you can actually create two NFC devices to have a full conversation
33:13
and it's actually a surprisingly high-throughput conversation if you need to transfer data or have a full conversation
33:19
full conversation so that two devices can connect to one another and then rotate NFC mode and actually
33:25
have a full conversation. Last one. When I made this, on my last update, I didn't update it for
33:33
this talk, so there may be better Xamarin support for NFC, but right now you need to do it
33:41
to where you expose the NFC on each platform. So let's say we're doing it on Android. If I want to
33:47
turn my device into an NFC beacon on Android, I could just go through, I create an NDEF record,
33:55
I figure out what kind of MIME type and what kind of payload I want to put on there,
33:59
and then I just create that, and it's a byte array embedded into an NDEF record.
34:06
And then whenever I want to do it, so if I'm creating that NDEF message,
34:10
you can see it right here, I'm just going to grab whatever text I want, I create that NDEF
34:15
message with an NDEF record array, and then I create my MIME record from the previous
34:23
slide for each record within the message. So there's a message, there's
34:30
an array of records, and within that I'm looking at each record.
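For reference, a minimal sketch of that Android side, assuming the standard Xamarin.Android NFC bindings; the MIME type and text here are placeholders, not the values from the slide:

```csharp
using System.Text;
using Android.Nfc;

public static class NdefPayloads
{
    // Build an NDEF message whose single record is a byte-array payload tagged with a MIME type.
    public static NdefMessage BuildMessage(string text)
    {
        byte[] payload = Encoding.UTF8.GetBytes(text);

        // Placeholder MIME type; a receiving app filters on this to know the payload format.
        NdefRecord mimeRecord = NdefRecord.CreateMime("application/vnd.example.provision", payload);

        // A message is just an array of records.
        return new NdefMessage(new[] { mimeRecord });
    }
}
```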
34:40
For the longest time, NFC wasn't available on iOS. It was actually on the device, there was an NFC chip, and Apple used it internally
34:45
for Apple Pay. And for the longest time, they just never exposed it through the SDK.
34:50
Now it's been available since, I don't know, iOS 13, 11, something. It's been available for, I guess,
34:56
in mobile terms, a while, but still pretty recent. And if you want to do it on iOS,
35:03
it's as simple as, you just create an NFC NDEF reader session and you call begin session. You'll
35:11
see this "Ready to Scan" screen, and then whatever you put in the Info.plist for the object name,
35:18
and then you have callbacks. So, did it detect? And if it did, you'll get a list of messages; you
35:23
can grab those messages. And if it did not, you'll get an invalid message error, and you can handle that as well.
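Here is a hedged sketch of that iOS flow, assuming the Xamarin.iOS CoreNFC bindings and an NFCReaderUsageDescription entry in Info.plist (that string is what appears on the Ready to Scan sheet):

```csharp
using CoreFoundation;
using CoreNFC;
using Foundation;

public class NdefListener : NFCNdefReaderSessionDelegate
{
    NFCNdefReaderSession session;

    public void StartScanning()
    {
        // Shows the system "Ready to Scan" sheet; true = invalidate after the first read.
        session = new NFCNdefReaderSession(this, DispatchQueue.CurrentQueue, true);
        session.BeginSession();
    }

    // Callback when one or more NDEF messages are detected.
    public override void DidDetect(NFCNdefReaderSession session, NFCNdefMessage[] messages)
    {
        foreach (var message in messages)
            foreach (NFCNdefPayload record in message.Records)
                System.Console.WriteLine(new NSString(record.Payload, NSStringEncoding.UTF8));
    }

    // Callback when the session ends without a valid read (cancelled, timed out, etc.).
    public override void DidInvalidate(NFCNdefReaderSession session, NSError error)
    {
        System.Console.WriteLine(error.LocalizedDescription);
    }
}
```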
35:29
Again, this is the point where I would ask, are there any questions, and I don't see any in chat, so moving on. This is the one I actually added for today's talk. This is the update. I should have updated everything else, but this is the update I made for today. So we're going to talk about LiDAR. So LiDAR is a method for measuring distance by illuminating the target with a laser light and measuring the reflection with a sensor. So the difference in the return times and the wavelengths can be used to make a digital 3D representation of the target
35:59
What does that mean? Okay, so let's say that we have a blue square, which is our LIDAR sensor
36:08
And so we go ahead and we want to scan the area around us using LIDAR
36:12
So we send it out and then it hits an object. So the laser returned quicker and then it hit another object, it returned quicker
36:22
So what happens is, actually on our end, as the LiDAR detects, we're going to start seeing these different
36:28
obscurements in our local area. The way to think about it, it's like radar.
36:34
It's actually figuring out where objects are in the 3D space. You're probably like, Jared,
36:40
why are you telling me about LiDAR? That's neat, but we're talking about devices here. Well, recently Apple announced that the new iPad Pro is going to have a LiDAR detector built into it, or LiDAR sensor built into it
36:56
Currently, there are no plans to expose LiDAR directly through the SDK. But the LiDAR detector can
37:04
have plenty of use cases. So let's go over the different use cases you can have for LiDAR. These are general use cases, so
37:12
you can have agricultural robots, they can use LiDAR for a variety of purposes, for ranging, so if it's
37:19
trying to disperse seeds or fertilizer, it uses sensing techniques, as well as just crop scouting
37:26
and looking for weeds. All kinds of stuff is done in agriculture with LiDAR. Autonomous vehicles, so
37:36
for most autonomous vehicle research going on right now, and even in the, I don't know, I think
37:41
in the Tesla and a few others, they have LiDAR sensors built in so that the system can get a 3D
37:47
measure of what's going on around it to determine or to help it make decisions, right? You want to know
37:52
where the distance is between you and other things real time, not just by trying
37:57
to look at it on a camera, but getting this LiDAR information as well. There are military
38:04
applications. So there are a few that are public. There's a LiDAR-based speed measurement of the AGM-
38:13
129 ACM stealth nuclear cruise missile. But there's a considerable amount of research underway in uses
38:22
for them, so they use it for higher resolution systems to collect enough data to identify targets like tanks.
38:28
So the military applications include the airborne laser mine detection system for countermine warfare, and there are a few other different
38:37
things that LiDAR is used for that are publicly known, and a lot of different stuff that is
38:42
still classified. So airborne LiDAR sensors are used by companies for remote field sensing
38:50
So survey. So one of the projects I had heard about was we were working in a facility for emergency
38:59
first responders. And in that facility, they actually could blow up a building and then train on that blown up building
39:06
So one of the projects that was going on there was actually using LIDAR attached to drones
39:11
And what you were trying to do was you were trying to give the first responders information about the building layout as it had collapsed
39:19
So using LIDAR to sort of get a feeling of the terrain and giving that information to first responders when they couldn't get it elsewhere
39:26
It's used heavily in atmospheric research and atmosphere sensing and it's very different in that case
39:32
So we won't go into it too much, but just understand that once you send the laser into the atmosphere
39:36
it disperses in certain patterns and you can actually get information. The big one for the iPad Pro is augmented reality
39:43
Because once you're doing augmented reality and you're building those 3D meshes around you and in the space that you're going to be
39:50
I don't know, I don't want to call it playing, because it's mainly for games, but in that space that you're going to be working in, augmented reality can be
39:59
enhanced by having that LiDAR sensor help build out the 3D world that you're going to be operating
40:05
in so that when you take the classic example is in Unity or Unreal, you just take a sphere and
40:11
you give it physical properties and then you start the game and it'll just drop and it'll roll around
40:16
in the world that you're in, that you're looking at, and it understands where the slope is and
40:21
understands where that is and that can all be enhanced on the iPad Pro with LiDAR and I think
40:26
you'll see that in more and more applications or more and more devices going forward because of
40:30
that application. All right I'm probably going to start talking faster because I'm looking at the
40:34
clock and I want to get everything in but if you have any questions just type them in chat
40:40
Let's talk about SMS. We all know what SMS is on our phone, but what we primarily use it for
40:46
is texting. We can send texts to each other, or we can send multimedia messages, right? We can send
40:52
GIFs and videos and whatever. But what's often not used, though it's available within the spec,
40:59
is the binary mode, and you can send binary messages if you're working at the device
41:06
level as opposed to more of the application level. So SMS is the Short Message Service
41:13
So as it's used on modern devices, it originated from radio, from radios and radio memo pagers
41:21
that use standardized phone protocols. And these were defined in 1985 as part of the Global System for Mobile Communications
41:28
Series of Standards. And the first message was sent in 92. So it predates probably some of you on this call
41:36
So for text, there are a couple of different use cases you can get out of SMS. You
41:39
can create a chatbot, so you can have it to where someone can just text the number, it hits your
41:45
back-end services, and then your back-end services can interpret what that message means, and you have
41:50
a full conversation that way without even having to make a mobile app. You can just have the
41:55
SMS endpoint be your entire interface for your user, or just a component of your interface.
42:01
You can do it for notifications. So, I don't know, I get the notifications from the Walmart pharmacy,
42:07
hey, your prescription is ready, and then it just shows up on my phone, and that's what a lot
42:11
of companies use. And I'm sure we all use it too for the, would you like to sign in, for your
42:18
two-factor, here's your code. And then you can just send data, raw data packaged up in a way, and since
42:24
the device can read the SMS message, you can actually look at the text, maybe it's a URL, parse the URL,
42:31
and react to it. So if we're using Xamarin and we're on a device, this is how simple it is to
42:38
send an SMS message. We just create an SMS message, we do ComposeAsync with that message
42:46
after setting the recipients, and we're done. This will actually just open the native SMS
42:51
application on your phone with the message populated.
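A hedged sketch of that compose call, assuming the Xamarin.Essentials Sms API; the number and body are placeholders:

```csharp
using System.Threading.Tasks;
using Xamarin.Essentials;

public static class SmsSender
{
    // Opens the native SMS app with the recipient and body pre-populated; the user still taps send.
    public static Task ComposeAsync()
    {
        var message = new SmsMessage("Your device has been provisioned.", new[] { "+15551234567" });
        return Sms.ComposeAsync(message);
    }
}
```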
42:57
If you want to receive it on the back end, I'm actually unintentionally wearing the Twilio shirt, but you can use Twilio, and it's
43:01
actually really easy to set up a Twilio SMS ASP.NET controller that will listen for the incoming
43:07
messages, and you can easily respond to them. As you can see from this
43:14
code, I just create the POST endpoint, I create a Twilio response with a message of, hello from,
43:22
and then, you said this, and I just return it, and Twilio handles all that behind the scenes
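And a hedged reconstruction of that back-end piece, assuming the Twilio helper library for ASP.NET Core; Twilio posts the inbound From and Body form fields to whatever webhook URL you configure for the number:

```csharp
using Microsoft.AspNetCore.Mvc;
using Twilio.AspNet.Core;
using Twilio.TwiML;

public class SmsController : TwilioController
{
    // POST endpoint Twilio calls for each incoming message.
    [HttpPost]
    public TwiMLResult Index(string from, string body)
    {
        var response = new MessagingResponse();
        response.Message($"Hello from the back end. You said: {body}");

        // Returns TwiML; Twilio turns it into the reply SMS behind the scenes.
        return TwiML(response);
    }
}
```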
43:28
So, on top of SMS, you can also send multimedia,
43:33
which we've all used. So the Multimedia Messaging Service is actually a different spec on top of SMS. It extends the core, and it allows the exchange of messages greater than the 160 characters that you're limited to in SMS
43:46
So what can you do with MMS? Well you can send about 40 seconds of video. You can send an image
43:52
You can send a slideshow, probably one smaller than the one you're watching right now, or you can send audio
43:57
So the SMS specification has two modes in which a GSM or GPRS modem or mobile phone can operate
44:09
And they are called SMS text mode and SMS PDU mode for protocol data unit. And protocol data unit
44:16
is how we're going to be able to send just a packet of bytes one way so that we can just do a sort of
44:24
binary communication. I don't know where I was going with that. Yeah, so that way we can do a
44:29
binary communication. So if we have it, what we do in the field sometimes is we'll have devices that
44:35
go farther out than the LTE is covered, and we can actually swap over to PDU mode for SMS
44:41
and we can send out binary messages over SMS, capture them on the back end through Twilio
44:48
and that way we can continue our conversation with the device even once it goes far out
44:52
way far out into the field. And that is sometimes for compliance reasons too
44:56
Let's say that you're building logistics hardware and it's going across the US border.
45:01
By federal law, we have to track that. So we have to keep in constant communication
45:07
I just to give you an idea for, I should have updated this too. This is when I first made this slide
45:12
this is what it cost to send messages through Twilio. And you can see the idea, you get the idea, right
45:17
I mean, 0.0075 cents for a message and then .02 to send
45:24
It's amazingly cheap, and for you to actually run up a noticeable bill
45:29
you really gotta send a lot of messages. Okay, so that's SMS in a nutshell
45:35
We've still got the camera and the microphone, so we'll go through those really quickly so that I can get to the questions at the end
45:41
Okay, so the camera, let's cover five things. We're gonna cover object recognition, computer vision
45:47
barcodes, optical character recognition, and facial recognition. Some of these overlap, but you get the general idea
45:54
These are all use cases that we have for the camera on these devices
46:00
So for image processing, you can think of it, or at least I think of it in four different ways
46:05
So one would be formatted images, which we'll go over in the next slide
46:09
There are algorithmic ways to take a look at images. There are data-driven ways to look at images
46:17
or process images. And then finally, you can combine those algorithmic and data driven and have a multi factor approach
46:27
So formatted images, you've already seen them in this presentation, right? Barcodes and QR codes
46:34
I was doing work for National Cash Register and if you go up
46:39
and use the NCR checkout self checkout, you can just use the
46:43
mobile app for most of these stores and it'll present you with a QR code. You just scan that QR code and you check out
46:49
Right, so I can actually embed a large amount of information. I've actually got one of my side projects. These are defunct eyes on view cameras and the way they connect when you kick them off is they use the camera built into them and your phone actually creates the QR code and it reads the QR code off of your phone so that it knows how to connect to your Wi-Fi and provision itself
47:13
So you can use formatted images both to send messages to your device, or you can create a QR code to send messages to another device as a way of sending that dynamic information
47:26
Sort of the same way the NFC can do reader writer, right? We can have a full conversation with barcodes if we wanted to get really fancy
47:34
Barcode scanning in Xamarin is pretty simple and robust at this point.
47:39
If you use the ZXing, ZXing, I don't know exactly how to pronounce it, but the ZXing NuGet package,
47:49
you just, there's a barcode scan click event, so you grab the barcode scanner object, you've got a
47:54
click event, and then you just say, hey, give me a scan, and tell me what the text
48:00
coming out of that barcode scan is. Simple as that.
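A hedged sketch of that click handler, assuming the ZXing.Net.Mobile package:

```csharp
using System.Threading.Tasks;
using ZXing.Mobile;

public static class ScannerHelper
{
    // Launches the scanning UI and returns the decoded text, or null if the user cancels.
    public static async Task<string> ScanAsync()
    {
        var scanner = new MobileBarcodeScanner();
        var result = await scanner.Scan();
        return result?.Text;
    }
}
```

In the app this would be wired to a button's click event, as described above.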
48:08
So I didn't update my QR codes. The last time I gave this talk, that was actually a QR code with the URL of the conference I was at,
48:14
and then here's one for my Pluralsight course. You just scan those with a barcode scanner, and
48:19
it takes you straight to that website. For algorithmic image processing, we've got a couple
48:24
of different things. I'm gonna go through them very quick. These are full topics, you can have
48:29
experts that have a lifetime of experience with them, I don't have that, so I'll go through them really quick
48:33
so that we can stay on time. So you can use histograms. Histograms give you a general idea
48:38
of color gradients. So if I take a look at an image and it is green, let's say I take a picture of a
48:43
forest, I can use a histogram of that to compare to other images. This works for quick matching, right,
48:49
if I have pictures that have nowhere near the same color gradient, it's an automatic rejection,
48:53
I know these pictures aren't similar. However, it can sometimes see a forest, a green forest, as a pile
48:59
of money, and it just wouldn't be able to tell the difference. So it's for that quick gradient check,
49:04
and you can filter out images quickly. You can do template matching which is sort of a way to do
49:12
let's say we take a large picture of the Mona Lisa and a small picture of the Mona Lisa. If I want
49:17
to find the large picture of the Mona Lisa and I have the picture of the small, I can use template
49:22
matching to basically look for that exact picture but maybe not the same size. So template matching
49:30
will be able to extrapolate say hey that's the same picture if we expand it out and it's the exact
49:36
replica of this picture. And then we have feature matching. Feature matching is a much more in-depth
49:42
topic, but you can do something like a key point analysis, which is using different algorithms that
49:47
can look at an image and determine what are the, and we'll put it in quotes, important parts of this
49:51
image, and you can take two images, take those important parts of those images, and compare the
49:56
distance between the important parts and determine how closely related these images are. And that's
50:01
a good way to take it if you're looking for images of things that just are not in the same rotation,
50:06
orientation, or maybe not even fully similar, but look similar. For data-driven...
50:16
Jared, I want to interrupt you and let you know that we're 10 till the hour, and we do have one
50:20
question. Oh, I do? Where are we at? What's your opinion on .NET MAUI, do you think it will dominate cross-platform UI? OK, so the question is, what's your opinion about .NET
50:34
MAUI, and do you think it will dominate the cross-platform UI market? As of right now, I don't want to shoot myself in the
50:40
foot, but it seems like MAUI is a rebranding of Forms as they recreate it,
50:48
as they recreate it to sort of take away some of the stuff they
50:51
don't like anymore. I mean, Xamarin.Forms is now, how old is it,
50:56
8? 7? 8 years? Six years? It's old enough to where it's generated
51:01
enough, or the market and the libraries have changed enough, and C# has
51:06
changed, that I think they're just sort of doing it to get a fresh start, in the
51:09
same way ASP.NET Core was. So I think MAUI will dominate, sort of.
51:14
I think, basically, it's just going to be Forms again.
51:18
However you think Forms is doing is how MAUI will do. For me, Forms has been great. I've made tons of apps. They're in the store
51:24
They do well. They're performant. I think MAUI will just be a more performant way to do that
51:28
once they hammer everything out. Okay. So I'll be wrapping up shortly. I've got this, the microphone
51:36
and then put your questions in and I'll keep a lookout for them. Okay. So for the data-driven
51:41
approaches, you're thinking about the machine learning and the AI-driven approaches. So I'll just
51:47
go through this real quick. You can actually still use OpenCV. So OpenCV is still available, and it
51:54
actually has a bunch of the different stuff that, a lot of people think of TensorFlow or any of
51:59
the other machine learning, PyTorch, but OpenCV has a lot of stuff built in that is in TensorFlow and in
52:04
PyTorch, available to it. You can actually do some pretty intense machine learning for images
52:10
in it. And there's also, as I said, TensorFlow and Core ML. So I put TensorFlow and Core ML specifically
52:16
because TensorFlow for Android, when you want to run your model, and then Core ML for iOS.
52:21
I'm going to skip through the next slide, it's Cognitive Services, and
52:25
they're real easy to do computer vision with, and facial recognition, and stuff like that. You can
52:32
easily just send it a file, and it gives you a list of features back that it thinks that file has
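A hedged sketch of that call, assuming the Azure Computer Vision client SDK; the endpoint and key are placeholders for your own Cognitive Services resource, and the feature list is illustrative:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;

public static class VisionHelper
{
    public static async Task<ImageAnalysis> AnalyzeAsync(Stream image, string key, string endpoint)
    {
        var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
        {
            Endpoint = endpoint
        };

        // Ask for a few feature types; the response comes back with tags, captions, faces, etc.
        var features = new List<VisualFeatureTypes?>
        {
            VisualFeatureTypes.Description,
            VisualFeatureTypes.Tags,
            VisualFeatureTypes.Faces
        };

        return await client.AnalyzeImageInStreamAsync(image, features);
    }
}
```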
52:38
So, the microphone. All right, the microphone, you can really think of it for two separate things.
52:45
One would be audio detection and language processing. So for audio detection, what we're doing is,
52:52
it's a process where we can infer if a sound or sounds is found within a set of sounds.
52:57
So what does that mean? So there are these audio detection devices on the
53:05
stoplights in places like Chicago, and those devices can actually be
53:11
used, they detect gunshots, and if you have multiple of them you can detect the center point for those
53:18
gunshots going off, and it helps with response times. You can also use audio detection, let's say we're on
53:23
a factory floor. So if you're on a factory floor, you can use audio detection to sort of determine,
53:29
is this person in a place they're not supposed to be, is this a dangerous piece of equipment and we've
53:34
got the audio profile for it, and have they wandered too closely to it when they're supposed to be
53:39
outside of it, and I can't track you any other way because of all of the other, and we'll say noise,
53:44
but I mean more wireless noise, on the factory floor. The other way the microphone can be used is
53:51
language. So we all remember back when Siri, OK Google, Cortana were supposed to be the,
53:59
I mean, the life-changing things that they were supposed to be, and we're still supposed to have
54:04
that with Alexa. So you can use it for assistants, right, just, let me set up a meeting in my calendar,
54:15
intent recognition, and sort of in the same way you can do the chatbot as we talked about earlier,
54:20
now, instead of text, we're just using direct audio. If you're using Cognitive Services with
54:25
LUIS, you can use the audio stream directly, you don't even have to worry about parsing out the
54:30
text in your own code or back-end service. So you just tell it,
54:35
you know, speaking to the phone, and on the back end it will do
54:38
the intent recognition for you. And then finally, search. I want pizza.
54:44
Where are they serving pizza nearby? This is an example of a LUIS request. So if I want to send
54:53
in that request to LUIS, I'm showing you the text version just because it's easier than showing you the,
55:00
it just makes more sense than showing you the audio version, because the audio version is just an array of bytes.
55:04
So for this, you want a prediction request. So, turn on the bedroom light.
55:08
I send that up to LUIS. This is how I call it.
55:13
I say, get the slot prediction. I give it my app ID,
55:17
I tell it the slot name, Production, and then I give it my request. The response it gives me will look something like this
55:25
I just say get the prediction async. It'll give me what my query was
55:29
and it'll tell me all my different intents, ranking them by the probability
55:33
that that was the intent I was looking for. And then as a helper, it'll just tell me which was the top intent
55:38
so that I can just quickly say what I think they were looking for and create an action based on that.
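A hedged sketch of that prediction call, assuming the LUIS v3 runtime SDK; the key, endpoint, app ID, and slot name are placeholders for your own LUIS resource:

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime;
using Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime.Models;

public static class LuisHelper
{
    public static async Task<string> GetTopIntentAsync(string key, string endpoint, Guid appId, string utterance)
    {
        var client = new LUISRuntimeClient(new ApiKeyServiceClientCredentials(key))
        {
            Endpoint = endpoint
        };

        // e.g. "Turn on the bedroom light"
        var request = new PredictionRequest { Query = utterance };

        // "Production" is the published slot; the response ranks every intent by score
        // and also surfaces the single top intent as a convenience.
        var response = await client.Prediction.GetSlotPredictionAsync(appId, "Production", request);
        return response.Prediction.TopIntent;
    }
}
```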
55:41
Okay, so what did we talk about?
55:47
Well, we went over, what do we talk about? We did a brief overview of mobile development
55:51
or device development. I keep going mobile device development. And then we looked at quickly Wi-Fi and LTE, not quickly, Bluetooth, and then NFC, LIDAR, SMS, the camera, and the microphone
56:07
So I'm going to go back and see if there are any more questions. Nope, just .NET MAUI. All right
56:16
This is the QR code for your speaker feedback and your event feedback
56:20
So if you would like to now, go ahead and scan those. Remember, if you scan the speaker feedback,
56:25
it's out of five stars, and it's only five stars, right
56:29
In fact, if you didn't like the topic, don't give any feedback
56:33
And then you can also do the event feedback as well. So I was just killing time and giving you time to scan it
56:38
Hopefully you scan it by now. And that's it. I'm Jared Rhodes
56:43
and this was a talk about controlling or running the world from the palm of your hand
56:48
Thank you for coming
#Intelligent Personal Assistants
#Mobile & Wireless
#Mobile Apps & Add-Ons
#Mobile Phones
#Operating Systems
#Programming
#Software
#Voice & Video Chat