This fits what you’re looking for:
In the UK the BBC is hugely pro-establishment but their reporting on world news is fantastic.
Do I just go in the order of the links you posted in the previous reply?
Yes. Get a working camera feed and go from there. For that, tackle the hardware side first: Pi, camera, power/ethernet, case, and storage for the OS. Then install the OS and the camera software and test it. Mine are all indoors, so if you're using yours outside you'll have to see which cases are weatherproof.
Also just to make sure I understand correctly - at the end of it I should have a camera setup that I can access, via VLC, from the device of my choosing over ethernet/intranet?
Exactly. VLC will be fine if you only want to view one camera. If you want to add more, do recording/motion detection, view them in a browser, etc., then motionEye on a server works, but there are other options. I know that the DSM OS on Synology NASes has its own solution for managing all that stuff.
There are plenty of guides but I just took it step by step. The links I provided have instructions for each bit of software needed. You'll need to be able to do things like flash the OS to an SD card or USB drive and then SSH into the Pi to install the camera software. Start here: https://www.raspberrypi.com/software/
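For reference, a minimal headless-setup sketch of those first steps. The mount path and the default hostname below are assumptions — adjust them for your machine and OS image:

```shell
# 1. Flash Raspberry Pi OS Lite to the SD card with Raspberry Pi Imager.
# 2. Enable SSH before first boot by creating an empty file named "ssh" on
#    the boot partition (mount path here is an assumption - adjust):
#      touch /media/$USER/bootfs/ssh
# 3. Boot the Pi and connect over the network (default hostname assumed):
#      ssh pi@raspberrypi.local
STEPS="flash OS -> enable ssh -> boot -> ssh in"
echo "$STEPS"
```

Newer versions of Raspberry Pi Imager can also enable SSH and set a hostname for you in its settings before flashing, which skips step 2 entirely.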
There’s no programming skill needed but you should be comfortable using the terminal, or at least be willing to learn. You don’t need to install an OS with a desktop; everything is done via the terminal.
After that’s done you can use VLC to view the feed and check it’s working before installing motionEye on a server. You just get the IP address of the camera and give VLC the URL like this: rtsp://xxx.xxx.xxx.xxx:8554/h264
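As a concrete sketch (the IP below is a placeholder — use your camera Pi's actual LAN address; port 8554 and the /h264 path just follow the URL pattern above):

```shell
CAM_IP=192.168.1.50                      # placeholder - your Pi's LAN IP
STREAM_URL="rtsp://${CAM_IP}:8554/h264"
echo "$STREAM_URL"
# Paste that URL into VLC (Media > Open Network Stream), or launch it
# straight from a terminal:
#   vlc "$STREAM_URL"
```

If you don't know the Pi's address, running `hostname -I` on the Pi itself will print it.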
If you look at the whole thing in one go, it’s overwhelming, but if you break it into chunks it’s not too bad and it’s a good learning opportunity, if that’s your thing.
It’s not too difficult, I figured it out and I eat crayons.
Here’s the software I use but there are other options: https://github.com/BreeeZe/rpos - That runs on the camera Pi and provides the video stream.
I use a Pi, a camera module like this https://www.raspberrypi.com/products/raspberry-pi-high-quality-camera/ and a suitable lens. You can get cheaper camera modules, IR modules, etc.
Also, something like this to power it: https://www.tp-link.com/us/business-networking/omada-switch-unmanaged/ds105gp/ You could just use a regular switch and power the Pi with a power adapter if that works better. My cameras are all ceiling mounted so having one cable for data and power made sense for me.
I use this to split the ethernet into power and data when it reaches the Pi: https://www.amazon.com/UCTRONICS-PoE-Splitter-USB-C-Compliant/dp/B087F4QCTR
Then I have this running on a Linux VM to collect the camera feeds and display them in a web browser: https://github.com/motioneye-project/motioneye
You’ll also need a case. My solution was to buy a metal Pi case and mount the module onto that, feeding the ribbon cable back into the case.
If you decide to go ahead and need help, just ask.
I’ve had four cameras running for a few years, streaming over RTSP and powered over ethernet. Works well!
Yorkshire Tea Gold, a little agave and a splash of milk.
What size is the membrane separating one point in time from another? If the membrane is the size of the observable universe we wouldn’t see a difference. If it’s the size of your living room you’d be fucked because your living room only exists at any given point in space time for a very very short time.
Good call.
After the diner scene in Reservoir Dogs when Little Green Bag kicks in.
I’d like to turn him off now.
Just as well. If you turn him on he offers to buy you a pony.
> Braking during impact is the worst thing you can do.
This is not correct, where are you getting this from?
Deer on the road is an edge case that humans cannot handle well.
If I’m driving at dawn or dusk, when deer are moving around in low light, I’m extra careful. I’m scanning the treeline, the sides of the road, the median, etc., because I know there’s a decent chance I’ll see them, and I can slow down in case they make a run across the road. So far I’ve seen several hundred deer and I haven’t hit any of them.
Tesla makes absolutely no provision in this regard.
This whole FSD thing is a massive failure of oversight. No car should be doing self-driving without both cameras and radar, and Tesla should be forced to refund the ~~suckers~~ customers who paid for this feature.
I’m not sure you understand what AGI is, and why we’re not going to invent it any time soon.
I recently moved to Affinity Photo with no complaints but I’m not a power user.