Polling vs WebSockets vs Server Sent Events (SSE) | Rubber Duck Dev Show 108

Episode 108 November 10, 2023 00:39:12
Rubber Duck Dev Show

Hosted By

Creston Jamison

Show Notes

In this episode of the Rubber Duck Dev Show, we discuss the benefits and disadvantages of polling, WebSockets, and server-sent events (SSE).

To get the show notes for this episode, visit:

https://www.rubberduckdevshow.com/episodes/108-polling-vs-websockets-vs-server-sent-events/

 


Episode Transcript

[00:00:00] Speaker A: Hello and welcome to the Rubber Duck Dev Show. I'm Chris. [00:00:03] Speaker B: And I'm Creston. [00:00:04] Speaker A: And this is the second time we're recording this show because mouse problems. Anyway, today we're going to talk about websockets versus polling versus server-sent events versus long polling, and all the ways that things can talk to other things. But before we get into that, how was your week, Bill? [00:00:28] Speaker B: Super busy, although I'll make some announcements. The first one is that I have created a mini course and I'm calling it the PostgreSQL Performance Starter Kit. Cool. So this is a series of three videos and it's free. I mean, you sign up for the newsletter, so if you go to scalingpostgres.com/courses, you can click on it and submit your email and name. I'll also put it in the show notes for this episode, but basically it's a course that kind of gets you started doing Postgres performance stuff. But it's very basic if you've ever done any performance work from the Postgres perspective. But if you've mostly just used your framework's console to access the database, it just shows you some pointers on how to get started. So the first lesson covers psql, which is the Postgres command line client. It covers pg_stat_statements and setting that up to be able to track what statements are running in your system, and then EXPLAIN, which helps you understand how the query is being run by the system so you can optimize it. And I show an example of doing that as a part of the course. And really this is to get anyone who's interested in taking my more comprehensive, in-depth course that's coming up. That's kind of why I released this now. The more complex course that's all about Postgres performance optimization, that's coming. I'll have some more announcements on that later.
But basically, for that course I'm planning the start date around the end of January, but there'll be an opportunity to get in early if you're interested, around the Black Friday/Cyber Monday time, which for those outside the US, that's basically after Thanksgiving, around November 24 or something like that. So that's a lot of stuff going on with me. How about you? [00:02:36] Speaker A: So still just busy. Mental exhaustion, because we've got the code freeze coming up December 15, which means, if you're a new developer and you're not sure what that means, at a lot of bigger corporations especially, they will freeze development and not allow any new production deploys other than emergencies after a certain time during the holidays, so you don't get caught out in the holidays with a bunch of problems and things. [00:03:04] Speaker B: Can stay in the cold. [00:03:06] Speaker A: Yeah. So we've got that coming up. But what that means is we have to push to try to get our yearly stuff all finished up by then so that we can then take a deep breath. But the next month is really pushing hard. Not to mention the fact that we just had that big acquisition that doubled the size of our engineering staff. We're trying to get all these people in at the same time. And so it's just like driving a race car 300 miles an hour down residential roads and trying to miss all the stuff that you got to miss at 300 miles an hour. So it's pretty frantic. Lots of late nights and meetings and all the stuff. But we're making good progress. So that's good. But, man, it's just exhausting. I'm going to be ready for a break after the code freeze. I'm going to take some vacation time and turn the gray matter off for a while. So anyway, good, but busy. All right, so websockets and polling and long polling and server-sent events. So let's do some definitions first for people who may not be familiar with what those things are.
So polling is basically kind of the oldest way to do this stuff, and the simplest. The client basically says, hey, server, you got anything for me? Hey, server, you got anything for me? Hey, server, you got anything for me? And just does that forever, over and over and over again. It's kind of like a heartbeat thing. Websockets are a permanently open pipe between the server and the client, or at least as permanent as you staying on the page that opened it, and they can just talk whenever something happens. So the client doesn't have to constantly ask. It asks once and says, okay, server, anytime somebody sends something that I want to know about, just go ahead and shoot it to me. Don't make me ask you for it. And that'd be something like, think of a chat application where you get in the chat room, you've opened a pipe to the server, and anytime somebody sends a message to that chat channel you're subscribed to, the server says, okay, here's a message that just got sent from wherever it got sent, and you don't have to constantly ask it for it. [00:05:53] Speaker B: And it's also bidirectional, meaning that when the client sends its chat message up to the server, it doesn't need to make a connection. It's already there. It just says, hey, this is what I said. And then the server multiplexes it to everyone else on the channel, right? [00:06:10] Speaker A: So it's kind of a pub/sub thing. Publish, subscribe. And that was kind of a newer way. Then there's a concept of long polling, which, why don't you explain that? [00:06:26] Speaker B: So it's basically the client just does a typical request of, hey, you got anything for me? Or hey, server, do this thing for me. And the server says, okay, hold please. So it leaves the connection open. It does its work, or it does its waiting or whatever it's doing. When it finally has something that it needs to deliver, it says, okay, here it is.
As soon as that gets delivered, the connection's dropped. So it's really a one-time delivery after a wait time. Like, hold please. It's kind of like going to McDonald's. Once you get your meal, the transaction is done, right? [00:07:06] Speaker A: The big disadvantage of that is when it puts you on hold, it usually doesn't play nice hold music for you. So you're just sitting there waiting. All right, so then we've got this thing called server-sent events. So go ahead and explain that one. [00:07:22] Speaker B: So that one is where the client makes the connection to the server and the server keeps it open. So it's like long polling, but as opposed to just delivering one time and then dropping the connection, it can send as many events as you want over the wire to the client and the connection just stays open. And this is interesting for doing real-time updates. Like maybe you have a graph that dynamically builds, you can constantly just keep those server-sent events coming, because really, when you're updating a graph, you don't need the duplex nature of a websocket. The client doesn't constantly need to be sending something, it's just getting a stream of information. [00:08:06] Speaker A: So like a ticker update, a stock market update for day traders. They've got this stuff on their screen and they're just receiving the information. They don't need to tell the server anything, they're just trying to get it. [00:08:18] Speaker B: I would say that's a good use case for server-sent events. [00:08:23] Speaker A: So those are very much like websockets, except the client can't talk back; that's the primary difference. [00:08:30] Speaker B: And the other thing to keep in mind is that websockets is an entirely different protocol than HTTP. So all the things we mentioned, polling, long polling, server-sent events, those are all part of HTTP. You're not using a different protocol. Whereas websockets is an entirely separate protocol, right?
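To make the polling definition above concrete, here is a minimal sketch. The stubbed `checkServer` function and the tick loop are hypothetical stand-ins for a real timer plus `fetch` against an endpoint; the point it illustrates is that every tick costs a full request whether or not the server has anything new.

```javascript
// Simulate a polling client against a stubbed "server".
// checkServer(tick) returns an update or null, like an endpoint
// answering "hey, server, you got anything for me?"
function runPollTicks(checkServer, onUpdate, ticks) {
  let requests = 0;
  for (let tick = 0; tick < ticks; tick++) {
    requests++; // every tick is a full request/response round trip
    const update = checkServer(tick);
    if (update !== null) onUpdate(update);
  }
  return requests;
}

// Stubbed server: only has something new on tick 7.
const received = [];
const total = runPollTicks(
  (tick) => (tick === 7 ? "new message" : null),
  (msg) => received.push(msg),
  20
);
// total is 20 requests for a single delivered update.
```

In a browser you would replace the loop with `setInterval` plus `fetch`; that request-to-update ratio is exactly the traffic cost the hosts weigh against websockets below.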
[00:08:52] Speaker A: So when I first started web development, the thing was, if you needed to have updates, consistent updates, you did polling. That's just what you did, because there weren't really any other options at the time. That was back in the day of the clay tablets and stuff. So then came along the websockets stuff. And I mean, websockets had been around for a while, but they weren't really used in commercial production environments for quite some time. So when those started to be really available and used quite a bit, when you put those things on the whiteboard, you say, okay, if I'm doing polling, I'm sending 60 transactions per second up to this thing, thousands of transactions every hour, and I've got all this traffic going through. But if I do a websocket, then I've just got one request up per client instead of thousands per client. So I'm going to dramatically reduce traffic. So, no-brainer, use websockets, right? But what it turns out is that in the field, you still run into people using polling instead of websockets. So why is that, do you think? [00:10:22] Speaker B: I think the main reason is just simplicity. It's super simple to do it in your JavaScript, for example. You just put it on a timer and just request from the server. Either it's going to give you as a payload an HTML file or JSON, or even JavaScript, sometimes Rails does, and you just take that and you do something with it, interpret it or update whatever you need to do. So there's no additional technology to build in. You don't have to worry about another protocol. With websockets you transition to a different protocol and have to deal with different firewalls or different technology; with polling it's kind of just all built in. I think that's the number one reason why people still do it. It's just simple to do. [00:11:08] Speaker A: Right, which makes sense, especially if you're in a situation where, okay, I'm just kind of putting up a minimum viable product, a proof of concept here.
There's no reason to go into all the websocket stuff. Let's just get it working, then we'll reevaluate what we need. But setting aside the difficulty, and I don't want to say difficulty, it's just more involved than just doing polling. Most languages, web development languages, now have the facility to set up websockets, and they can walk you through it. It's not horrific, but it's more involved. But the other problem is, and this is the thing that you didn't see on the whiteboard, or at least we didn't back in the early days of websockets coming out. We're sitting there going, hey, it's a no-brainer because it's going to dramatically reduce my traffic. But the problem is it dramatically increases your processing overhead. And that is because it leaves connections open. [00:12:23] Speaker B: And all of the technologies other than just simple polling leave connections open. Because a lot of web developers, particularly those who use Ruby on Rails, are so accustomed to the server process being very temporary. Like, you just send a request to the router, it goes to the controller, does its thing, returns the data, it's done, job is done. That's not how it works when you're doing long polling, server-sent events and websockets. It has to maintain that connection for however long it is. If it's long polling, that Ruby process is going to be running until it finally delivers the payload or whatever it is to the client. For server-sent events, that Ruby process is going to keep running until it's closed. Same thing for websockets. So that's the big... [00:13:24] Speaker A: Well, in life, nothing is free. You're going to pay for it somehow, and in this case, you're going to be robbing Peter to pay Paul. The question then becomes, all right, am I at a point where the traffic is more problematic than extra processing? Then you got to think about costs.
You got to think about loads on your servers, because if you got to have 1000 times as many processes running, that's going to get expensive. If you're in the cloud, as we've talked about before, cloud ain't cheap, so there's trade-offs to be had. So am I at a point with my polling traffic that it's problematic? If it's not, switching to websockets just because that's the cool toy may not make sense. [00:14:19] Speaker B: You may introduce new problems, because now you have to maintain, it depends on the back end server technology, but now you have to maintain additional processes, and that has a memory impact particularly. That's the limitation you might hit first, and also a computational impact. [00:14:42] Speaker A: Now, long polling, sorry, go ahead. Long polling can kind of split the difference there. It gives you some of the benefits of both of those. If I've got a thing that happens, let's say irregularly, but it's minutes apart, right, I don't have to poll every 10 seconds to find out if it happened. I just poll once, and then when it happens in three minutes or five minutes or ten minutes, it sends it back and closes the connection down. [00:15:14] Speaker B: But just keep in mind that whole time you have a process dedicated to that one user waiting for something. [00:15:21] Speaker A: Exactly. Right, but it will eventually close down. [00:15:24] Speaker B: Sure. [00:15:27] Speaker A: So I mitigate some of the issues with the websockets, but then I don't have the constant communication going. So if I don't ask again and some other event happens, I won't know about it. All right, so how does server-sent events fit into that? Does that buy me anything over websockets? Why would I use that instead of a websocket? What difference does that make? [00:16:03] Speaker B: So if you only need to send down events from the server, again, think stock ticker, you're just sending updated information. It's a great use case for that.
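The long-polling "hold please" pattern described a moment ago can be sketched as a race between the awaited event and a timeout. The names here are hypothetical; a real Rails or Node handler would hold the actual HTTP response object open instead of a promise, which is exactly the dedicated-process cost Creston warns about.

```javascript
// Long-poll handler sketch: hold the "request" open until data
// arrives or a timeout fires, then deliver once and let the
// connection close. A timed-out poll resolves to null and the
// client simply issues the next long poll.
function longPoll(waitForEvent, timeoutMs) {
  return Promise.race([
    waitForEvent(), // resolves when something finally happens
    new Promise((resolve) => setTimeout(() => resolve(null), timeoutMs)),
  ]);
}

// Stub: an "event" that shows up after 20 ms.
const eventAfter20ms = () =>
  new Promise((resolve) => setTimeout(() => resolve("report ready"), 20));

longPoll(eventAfter20ms, 200).then((result) => {
  // result is "report ready"; the 200 ms timeout never fired
});
```

Note that for the entire wait, one worker is pinned to one client, which is why the hosts treat long polling as a one-shot delivery tool rather than a fan-out mechanism.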
You can use the HTTP spec, so you don't have to do a separate websocket server if you're needing to do that. It's already handled as part of the HTTP spec, so I think there's less infrastructure involved with implementing that. But it still has the downside of keeping a process open. Because I was looking at that, I was looking at a way to proxy things such that if I want to support 1000 clients, I don't want to have to have 1000 Puma threads running, right? One dedicated to each person. Can I do 100 Puma threads that then help support 1000 clients? And I couldn't figure out how to do it. Yeah. [00:17:14] Speaker A: So if anybody knows how to do. [00:17:16] Speaker B: Literally, yeah, if you know how to do that, put it in the comments. Now, there is a way to do it with websockets, I'll mention in a second, but it's not built into the Rails ecosystem. It requires a separate library. Because I was trying to use server-sent events for a use case, and it just kept the connection open on the controller. So you're literally running a controller action for potentially minutes, or it's staying open waiting for something. So it was like, what? I'm so used to things happening super fast and delivering and being done. So this will probably make your APM metrics all wonky if you start trying to do some of this stuff, right? Because suddenly your controller action is taking minutes to complete. Yeah. [00:18:15] Speaker A: And that always hurts when you see that. Yeah. Most of what I used in my career was either the polling or the websockets. I didn't really deal with the others. I used long polling a couple of times, but for specific situations. Mostly it was one of those two. And honestly, that was when I was still doing front end stuff. I don't have to deal with that anymore, so this isn't a problem for me.
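One reason SSE needs less infrastructure, as discussed above, is that the stream the server writes into that long-lived response is just text with `Content-Type: text/event-stream`. A sketch of the framing (the `tick` event name and payload are made-up examples):

```javascript
// Build one server-sent event frame. Per the SSE wire format,
// an optional "event:" line names the event, each "data:" line
// carries the payload, and a blank line terminates the frame.
function sseFrame(data, eventName) {
  let frame = "";
  if (eventName) frame += `event: ${eventName}\n`;
  frame += `data: ${data}\n\n`;
  return frame;
}

// A stock-ticker style update the server could write to the
// open response at any moment:
const frame = sseFrame(JSON.stringify({ symbol: "ACME", price: 101.5 }), "tick");
// frame === 'event: tick\ndata: {"symbol":"ACME","price":101.5}\n\n'
```

On the browser side, `new EventSource(url)` parses these frames and fires `message` or named events; no protocol upgrade is involved, which is the "it's all still HTTP" point from the definitions earlier.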
But it's an interesting thought experiment, because when you lay these things out on a whiteboard, and this is kind of educational for a lot of how to think about dev architecture and what you're doing and the ramifications it has. When you lay these things out on a whiteboard, it looks very obvious which one you should use for any given situation. I would just always use websockets, right? If I'm just looking at a whiteboard, websockets, why would I ever use anything else? But there's a lot of hidden costs to those things that you don't think about. And that's instructive, because when you're developing, when you're writing stuff, one of the mistakes I see people make a lot of times is not thinking ahead to, well, what ripple effects is this going to have if I write this this way? What is it going to do to the infrastructure, the servers I have to put it on, the things I have to deal with, the technology outside of this piece of code. And I think this is a good example, a very simple example, of things like that. Because once you took this off the whiteboard, it was really easy to see that, oh, I've got process issues here. Yeah, I've eliminated my traffic problem, but holy crap, this just got 1000 times as expensive or whatever it was. And my APM is having a fit on the processor side now instead of the traffic side. [00:20:39] Speaker B: Yeah. So, talking about websockets for a second here. We predominantly use Ruby on Rails, and they do have Action Cable. And as I was looking at that, people were reporting it works okay once you get up to about a thousand clients, and then it starts kind of not working as well. And I wonder if it's because of this keeping-connections-open business. Unfortunately, I don't know, because I actually never committed to Action Cable, because the requirement from a particular client was, hey, we want to support 5000 clients.
So I'm like, okay, well then based on what I'm hearing, I shouldn't do this. But I found this other solution called AnyCable. And what that actually does is it doesn't use Ruby, it uses Go for the websocket connections. So I think it kind of acts like this proxy that I was talking about. It can support thousands of connections, but yet you don't need as many Ruby processes to do it. It handles it some way. [00:22:00] Speaker A: Right. [00:22:00] Speaker B: So do other frameworks have something like that? I have seen a lot of people use Socket.IO, which might do something similar depending on the frameworks that you're using. But it's definitely a problem. And I have seen Elixir say it can support a million websockets on a single server. [00:22:27] Speaker A: Wow, that's a bold claim. [00:22:30] Speaker B: Well, they did demonstrate it with what they showed. Now the question is, okay, how much memory was each process using, how much CPU work was actually being done in the maintenance of it and whatnot. [00:22:44] Speaker A: Right. How big is that server? [00:22:47] Speaker B: The answer is, it kind of depends. [00:22:49] Speaker A: Right. [00:22:50] Speaker B: But then this is where it gets interesting, something we haven't talked about. Another disadvantage is that, okay, let's say you go with the websockets, and let's say you're using something where a single server can't handle 5000 websockets. This is just a theoretical argument. All right, well then you need another server. Well then the problem is, so many load balancers can't do websockets. Or I've actually seen where you actually need to then implement a message broker, and things start getting complicated really fast. And there are all these providers that say, hey, we do the websockets for you, and so then you're paying the money to do it. Or AnyCable.
Like, the example I was given, they have their Pro version, and it's basically, this is not necessarily what they have said, but my interpretation: all right, well, we'll probably have to start paying for that if we ever need to go multi-server with AnyCable. [00:24:05] Speaker A: Right. [00:24:06] Speaker B: Just because of the complexity. It's not like bringing up another web server behind HAProxy; it's not that easy like it is with your typical Rails application where the client's just sending something. Because then there are also problems in terms of load balancing. I remember someone writing a post, sorry, I couldn't find it again, but they said, all right, well, we're going to bring up a new websocket server because they have more traffic coming in. But only new people would go there, because there was no way, without again additional technology to do it, to redirect existing users to a different websocket on a different server. [00:24:57] Speaker A: Right. And that was something I ran into early on when we started using websockets. We got the websockets all set up and they were working great. But then we kind of outgrew things and needed to set up load balancers and distributed processing and stuff, which, like you said, is easy to do with a request/response cycle because. [00:25:24] Speaker B: Exactly. [00:25:25] Speaker A: I'm done after I send the request and get the response. But when you start trying to implement that with websockets, it got really confusing, because we were like, oh, wait, we can't just send this pipe over to another place. It has to stay here as long as this person is connected. So now I've got to worry about, okay, if this part of my load balancer gets full, I may or may not get that back anytime soon. And if, God forbid, something were to happen and it goes down, I can't just move those connections over. The client has to reconnect, which. [00:26:10] Speaker B: Reinitializes things.
I mean, it can be done, but it's more disruptive for the client. [00:26:15] Speaker A: Right? [00:26:16] Speaker B: Whereas if you were using polling, the state isn't going to be as significant. Because, like, imagine you're doing a chat app with a websocket. You probably have a whole chat history of what's happening. But if that server suddenly goes down, I bet most of that chat history may be erased, because it has to connect back up as if it were a new client. [00:26:37] Speaker A: Right, which you run into even with, like, Zoom or Google, I forget which one does it. But if there's chat going on and you join the room, you only get the chat from the time you joined. And then if you get disconnected and come back in, you don't see what you saw before; it starts fresh. [00:26:57] Speaker B: Yeah. Part of the reason I actually wanted to talk about this, or I suggested this as a set of topics, is that for a client, I actually reversed their websocket usage, and I went to straight old polling. Because the request of this particular client was, hey, we want to be able to support 10,000 connections. So I started thinking about it, and I was like, okay, this may mean having multiple websocket servers, getting this additional technology to be able to handle this stuff. And then I said, could I do this with polling? Well, first I tried server-sent events, and that didn't fly at all, because of the process having to stay open. So I'm like, well, good Lord, I'm going to need 10,000 processes to handle this. Now, I could spread it out to multiple boxes. It might be easier to use that than the websockets, but still, you're going to run into that problem. So I just basically said, okay, can I poll to be able to do this? And they wanted something real time. I said, yeah, but does it really need to be real time?
This was mostly me talking about it as opposed to talking with them, about how important it was. So what I basically did is I set up polling to say, hey, has something changed? I'm trying to think how it was configured. It might have been something like every two to five seconds. So it's a random value; how often it polls is each time somewhere between two and five seconds. So as opposed to all clients polling at once, it was just more distributed when the server was being hit. And what it was returning was a JSON that had two values in it. So the payload was super small, and I made the controller as efficient as possible, and it basically returned in, like, oh gosh, what was it? I think it may have been like two milliseconds. [00:29:41] Speaker A: That's pretty good. [00:29:42] Speaker B: On average. It was just do the simplest thing. Just say, hey, is an update needed? And then if an update is needed, it does a separate connection to say, okay, give me what changed. And then I did the scale-out and put multiple servers up, and then was able to hit the 10,000 requirement in the testing. [00:30:07] Speaker A: Right. [00:30:08] Speaker B: So I looked at the prospect of doing the websockets and said, can I go back to just using other tech? And it worked. And the client really didn't notice. It was real time enough, put it that way. [00:30:29] Speaker A: Right? Yeah. [00:30:30] Speaker B: That delay of maybe two to three seconds, and then it sends a request and gets the data down, that was enough. Real time enough. Yeah. [00:30:41] Speaker A: Right. And that's another thing to think about, too: the non-polling strategies, the other ones, they are scalable, but the scalability of those things gets very complicated very quickly, especially the full websocket one.
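The randomized two-to-five-second interval Creston describes spreads the load so a fleet of clients doesn't hit the server in synchronized bursts. A sketch, using the episode's example bounds (the `pollFn` callback is a hypothetical stand-in for fetching that tiny two-value JSON):

```javascript
// Pick the next poll delay uniformly between min and max so many
// clients don't synchronize into thundering-herd spikes.
function nextPollDelay(minMs, maxMs) {
  return minMs + Math.random() * (maxMs - minMs);
}

// Each client re-arms its own timer with a fresh random delay.
function scheduleNextPoll(pollFn, minMs = 2000, maxMs = 5000) {
  setTimeout(() => {
    pollFn(); // e.g. ask "is an update needed?" and act on the answer
    scheduleNextPoll(pollFn, minMs, maxMs);
  }, nextPollDelay(minMs, maxMs));
}
```

Using a chained `setTimeout` rather than `setInterval` also means a slow response naturally pushes the next poll back instead of letting requests pile up.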
So, I mean, you can throw tons of money at it and tons of time and teams at it, and you can scale it to 10,000 or 100,000 or a million. But my Lord, you better have a lot of money and a lot of time if you're going to do that, because it's complex. [00:31:19] Speaker B: Yeah. Or what I would have done is paid a vendor to handle the multi-server websockets implementation. [00:31:29] Speaker A: Right. But I can tell you from what we're going through at my job, that can also get extraordinarily expensive. Very. I'm not saying it won't work, because we've got the same problem. A lot of our stuff was built on websockets, and we're actually going through and pulling some of that back to just polling, because the expense of dealing with that many users on websockets is just too high, and it's not buying us enough to justify that kind of expense. When there were 100 users, sure, it was great. But tens of thousands? Yeah, that's when you just can't do it. [00:32:11] Speaker B: Yeah. Here's where it starts to fall down: you're dedicating a process one-to-one per person, whereas with polling, for example, I could have only 50 processes that are serving 1000 users. [00:32:32] Speaker A: Right. [00:32:33] Speaker B: And that's just an example, but depending on how infrequently you poll, you could support even more and more users for a given number of CPUs or whatever. [00:32:48] Speaker A: It's much easier and cheaper to scale traffic than it is to scale processes, especially when you got to be careful about where those processes are and what they're connected to. [00:33:03] Speaker B: It still affects me emotionally, thinking, sure, 98% of the time we're polling and getting absolutely nothing. So it's an affront to my sensibilities; I think we're wasting clock cycles. But at the end of the day, there are cases where it's a cheaper option. [00:33:26] Speaker A: Right. Cheaper, easier and better for the user in a lot of cases.
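The back-of-the-envelope math behind "a few processes can serve many polling clients" is worth writing down. These numbers are illustrative, not measured: assume each poll is handled in about 2 ms (the figure Creston quoted for his optimized controller) and clients poll every 3.5 s on average (the midpoint of the two-to-five-second jitter).

```javascript
// Roughly how many polling clients one worker process can serve:
// each client occupies the worker for handlerMs out of every
// avgPollIntervalMs.
function clientsPerProcess(handlerMs, avgPollIntervalMs) {
  return Math.floor(avgPollIntervalMs / handlerMs);
}

// Worker processes needed for a target fleet of clients.
function processesNeeded(clients, handlerMs, avgPollIntervalMs) {
  return Math.ceil(clients / clientsPerProcess(handlerMs, avgPollIntervalMs));
}

const perProcess = clientsPerProcess(2, 3500);          // 1750 clients per process
const forTenThousand = processesNeeded(10000, 2, 3500); // 6 processes
// Compare: a one-connection-per-client websocket design holds
// 10,000 open connections for the same fleet.
```

This ignores queueing effects and bursts, so real capacity would be lower, but it shows why the 10,000-client requirement was reachable with a handful of polling servers.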
Because if you're doing polling and you lose a connection for 5 seconds, so what? It just picks back up where it was, you know, nothing. If you lose a connection with a websocket, somebody better have written that very well, so that it knows, hey, I lost this connection, I need to put it back so the user isn't confused, which is, again, kind of hard to do. Anyway, if you're going to do websockets or long polling or server-sent events, we're not saying they're bad. They are good. They have uses, very good uses. But think about things like cost and scalability. When you get to an app size that's more than 1000 users, you're going to start thinking, oh gosh, maybe I should go to polling. [00:34:30] Speaker B: Yeah, this is one of those things where your technology decision can really cause pain later on if you have a successful app. [00:34:41] Speaker A: Yeah. So my recommendation would be: websockets sound great, and they really are. But make sure you really understand how those things work, or the server-sent events or the long polling, understand how they work so that you understand what you're going to get yourself into if growth happens. [00:35:00] Speaker B: Yeah. And if you need them, right? [00:35:03] Speaker A: Yeah. Start with the polling. Evaluate your traffic and see if it's worth the effort and expense to switch to something else just because it sounds better and doesn't offend my sensibilities of, hey, we're wasting a lot of traffic here. Wasting a little traffic is better than wasting a lot of money and time. [00:35:22] Speaker B: I mean, I think chat is an example that websockets are a really good use case for, because not everybody is going to be doing a real-time chat. Like, if you think of a support app, I would think, given the number of customers that are looking at documentation versus actually chatting with someone online, chat is a very small portion of what the server does.
So they're dedicating websockets for that one singular purpose of chatting with someone. Well, it depends on your scale, but you're going to have many more requests coming in for documentation and other types of things for the application compared to chatting with someone. So I think there are certain use cases where chat is a no-brainer, because you know things are going to be happening relatively quickly, so dedicating a process to that person chatting probably makes sense. [00:36:24] Speaker A: Right. And things like long polling, I've seen them used for, hey, server, run me this report, and I'll just keep my long poll open to let me know when it's done, instead of turning around and sending me an email with a background job or something. That's a reasonable use case for a long poll, because I only need to do it once. And then when that report is done in a minute, minute and a half, you release that connection and we go on. I'm not asking for those every second, so that's usually not a scalability issue. [00:36:57] Speaker B: And what I do with that is I still just use straight-up polling. I just, like every five or ten seconds, say, are you done? Okay. And then when it's done, it just updates something that says, okay, your report is ready. So I still use polling for that. [00:37:12] Speaker A: Right. And so with a lot of this too, what you need to think about is what is the user experience going to be like? And are any of these things going to help me improve the user experience for my users? Because if regular polling gives the user a great experience and doesn't kill your server, why would you do something else? You need to have a reason, and there are reasons, but make sure you have one. Anyway, hope you guys enjoyed that. That was a fun convo. If you did, hey, give us a thumbs up. Subscribe. Share this with your friends. Ding that notification bell.
[00:37:58] Speaker B: Also, let us know if you've run into issues using things other than polling, if you've fallen back to polling in some use cases, or if you say, you guys are nuts, websockets is the jam, I love it. Then let us know in the comments. [00:38:14] Speaker A: Yeah, give us some comments. We like to learn from you guys, so make sure you do that. If you prefer to listen to this as a podcast while you're jogging or swimming or mountain climbing, whatever your jam is, you can find us on pretty much any podcast provider. We're at pretty much all of them. Also, we've got short form videos on there. We have a team of folks that takes our long videos, our shows, and cuts them apart into little segments that you can watch or listen to quickly. So check those out on the channel, and we will be back next week with some new stuff. We haven't decided exactly what yet, but we'll get that out there. Don't worry, we'll have something. So until next week, happy programming.
