
Comment Re:Thank you, taxpayers (Score 1) 108

Permits, drawings, restoration, meeting the requirements for right-of-way access, dealing with glacial rock deposits, etc. Trenching can take over a year to return the ground to its original condition, and that's not accounting for the fact that the soil may not settle back the same way if there's any moisture.

I can tell you that boring is the standard method around which you optimize many of these items.

Pole attachment costs can be the same as or more than underground/boring if you need to upgrade or replace the poles, and it can take up to 180 days to get all the utilities on a pole to relocate, which is why Google wanted one-touch make-ready rules to become the norm. They're not wrong, but another issue is that many attachments to poles are illegal, and in some of these rural areas the poles are actually the ORIGINALS from the REA expansion, dating prior to the 1940s.

Comment Re:Why not use amplified WiFi for half a mile? (Score 1) 108

The goal of the county gap project and its funding was to provide service to these areas. The problem is that the tower may be 2-3 miles away, and hitting these speeds requires more spectrum than is available, even if you use the RF Elements horn-based antennas.

If the farm properties along the way are subdivided, they can be connected at much lower cost in the future with this in place.

Comment Re:Starlink... (Score 5, Interesting) 108

But that fiber run is a much better investment long term, as the maximum data transmission of the fiber line itself is much higher than the 1Gb/s currently offered, and all that's needed to upgrade it is better fiber transmitters and receivers at each end, as long as the ISP can also handle the increased bandwidth. As the national and global networks improve, so could the existing fiber infrastructure.

There's also this thing known as a "pole denial" - aka no, you can't attach to that pole - which then requires doing something else: either setting your own pole or finding some alternative. Just like mixing technologies or environments (e.g. Ubuntu vs Debian, or worse, RPM vs DPKG, or Windows vs *BSD), having a mix of construction types can make your life more complex. I'm trying to optimize a lot of variables at once.

Comment Pay the man, Silent Bob... (Score 1) 94

I'm an actual Starlink user at my farm. It's head-and-shoulders better than any competing service.

I previously used a cellular uplink... and even with a yagi mounted 30' up on a mast, I barely had 1-2Mb/s of bandwidth. It was truly miserable.

Starlink is a game-changer... give 'em the freakin' money. They've done something truly miraculous for rural internet users, who previously had only terrible/expensive options. As a taxpayer, I'm actually glad to see the money I contribute going to something useful.

Comment Re:standard plug is need and no 3rd party repair l (Score 1) 85

CCS can go up to around 400kW... well, actually, I think it's 500kW now. That's 1000VDC x 400A or 500A.

Most BEVs can't go that high. In fact, I think there are only one or two that can actually max out current 350kW chargers for any decent amount of time.

-Matt

Comment Re:Feeding stations... (Score 1) 85

Yes, but nobody fast DC charges to 100%. The charge rate drops modestly past 60% and precipitously above 80%, so people only charge to 60-80% and no more - usually 30-40 minutes max. And if your final destination is close and destination charging is available, only enough to get there. So for trips just beyond the vehicle's range, the charging stop can be very short, like 10-15 minutes.
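
To make the taper arithmetic concrete, here's a toy model in C. The curve and pack size are made up for illustration (real charge curves vary by pack, temperature, and vendor); the point is just how disproportionately expensive the last 20% is:

    #include <stdio.h>

    /* Toy charge-taper model: full power to 60%, modest taper to 80%,
     * steep taper above 80%. All numbers are illustrative. */
    static double charge_kw(double soc)
    {
        if (soc < 0.60) return 250.0;   /* assumed peak rate */
        if (soc < 0.80) return 150.0;   /* modest taper */
        return 40.0;                    /* steep taper near full */
    }

    int main(void)
    {
        double pack_kwh = 75.0;         /* assumed pack size */
        double soc = 0.10, minutes = 0.0;
        double step = 0.01;             /* integrate 1% at a time */

        while (soc < 0.999) {
            minutes += pack_kwh * step / charge_kw(soc) * 60.0;
            soc += step;
            if (soc > 0.795 && soc < 0.805)
                printf("10%% -> 80%%: ~%.0f minutes\n", minutes);
        }
        printf("10%% -> 100%%: ~%.0f minutes\n", minutes);
        return 0;
    }

With these made-up numbers, 10% to 80% takes roughly 15 minutes, while the last 20% alone takes over 20 more - which is exactly why nobody waits around for 100% at a fast charger.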

At home, or at a destination, people will charge to 100% overnight if they'll be taking a long trip the next day, and otherwise only charge to 70% or 80%. Unless it's a Model 3 with an LFP battery, in which case people charge to 100% overnight.

-Matt

Comment Re:Feeding stations... (Score 1) 85

Yah. The connector standard has settled down, which is good. Chargers are typically only able to do AC or DC anyway, not both. CCS on the vehicle allows both J1772 (AC only) and also has the extra pins for high amperage DC.

L1 (120VAC): 11-16A (in-vehicle charger)
L2 (240VAC): 24-80A (in-vehicle charger)
L3: was never implemented
Fast DC: direct DC to battery, dynamically managed up to 1000VDC and 500A

Charging power is limited by the lower of what the external unit can supply and what the vehicle can accept.
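
As a sketch of that "lower of the two" rule (the charger and vehicle limits below are hypothetical, not from any spec sheet):

    #include <stdio.h>

    /* Delivered power is capped by BOTH the charger's and the
     * vehicle's voltage/current limits, negotiated per session. */
    struct limits { double max_volts, max_amps; };

    static double max_kw(struct limits charger, struct limits vehicle)
    {
        double v = charger.max_volts < vehicle.max_volts
                 ? charger.max_volts : vehicle.max_volts;
        double a = charger.max_amps < vehicle.max_amps
                 ? charger.max_amps : vehicle.max_amps;
        return v * a / 1000.0;
    }

    int main(void)
    {
        struct limits unit = { 1000.0, 350.0 };  /* a 350kW-class cabinet */
        struct limits car  = {  450.0, 500.0 };  /* a 400V-pack vehicle */

        /* 450V x 350A = 157.5kW: the car's pack voltage and the
         * cable's amperage bind, not the 350kW nameplate. */
        printf("delivered: ~%.1f kW\n", max_kw(unit, car));
        return 0;
    }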

CHAdeMO is being steadily removed; the cable standard was too limited. So if you own an old Leaf, you need to start carrying around an adapter.

-Matt

Comment Re:How about at highway rest areas? (Score 1) 85

I'm sure it is being looked at. The bigger fast DC chargers have to be located near fairly hefty distribution lines (several thousand volts AC is preferred) in order to be able to situate a sufficient number of DC supplies at a location. A DC fast charger outputs 300VDC to 1000VDC based on the vehicle's battery pack requirements, and up to 500A, all dynamically controlled via continuous communication with the vehicle.

-Matt

Comment The basic premise is already not scaleable (Score 1) 209

"In a Substack article, Didgets developer Andy Lawrence argues his system solves many of the problems associated with the antiquated file systems still in use today. "With Didgets, each record is only 64 bytes which means a table with 200 million records is less than 13GB total, which is much more manageable," writes Lawrence. Didgets also has "a small field in its metadata record that tells whether the file is a photo or a document or a video or some other type," helping to dramatically speed up searches."

Yah... no. This is the "if we make the records small enough, we can cache the whole thing in ram" argument. It doesn't work in real life. UFS actually tried something similar long ago to work around its linear directory scan problem. It fixed only a subset of use cases and blew up within a few years as use cases exceeded its abilities.

The problem is that you have to make major assumptions as to both the size of the filesystem people might want to use AND the amount of ram in the system accessing that filesystem.

The instant you have insufficient ram, performance goes straight to hell. Put those 13GB on a hard drive with insufficient ram and performance will drop to 400tps from all the seeking. It won't matter how linear that 13GB is on the drive... the instant the drive has to seek, it's game over.
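
To put a hedged number behind that 400tps: if every uncached lookup costs on the order of 2.5ms of seek plus rotational latency, the drive tops out around 1 / 0.0025s = 400 random operations per second, no matter how fast it streams sequential data.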

This is why nobody does this in a serious filesystem design any more. There is absolutely no reason why a tiny little computer (or VM) with a piddling amount of ram should not be able to mount a petabyte filesystem. Filesystems must be designed to handle enormous hardware flexibility, because one just can't make any assumptions about the environment the filesystem will be used in.

This is why hierarchical filesystem layouts, AND hierarchical indexing methods (e.g. B-Tree/B+Tree, radix tree, hash table) work so well. They scale nicely and provide numerous clues to caching systems that allow the caches to operate optimally.
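
A quick sketch of why the hierarchical-index argument scales (the fanout is an assumption; real B-tree fanouts depend on node and key size):

    #include <math.h>
    #include <stdio.h>

    /* Depth of a B-tree-style index grows with the LOG of the record
     * count, so even enormous filesystems need only a handful of
     * levels - and only the top few need to stay cached. */
    int main(void)
    {
        double fanout = 256.0;              /* keys per node, assumed */
        double counts[] = { 2e8, 1e12, 1e15 };

        for (int i = 0; i < 3; ++i) {
            int depth = (int)ceil(log(counts[i]) / log(fanout));
            printf("%.0e records -> %d levels\n", counts[i], depth);
        }
        return 0;
    }

That 200-million-record table needs only 4 levels at this fanout; even a quadrillion records needs just 7.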

-Matt

Comment Re:It's mostly about the metaphor. (Score 1) 209

Yes, you can still have trees with an object store. The object identifier can be wide... for example, the NVMe standard, I believe, uses 128-bit 'keys'. Sigh. Slashdot really needs to fix its broken lameness filter; I can't even use brackets to represent bit spaces.

So a filesystem can be organized using keys like this for the inode:

parent_object_key, object_key

And this for the file content:

object_key, file_offset|extent_size

For example, a file block could easily be encoded as a 64-bit integer byte offset, with a 63-bit positive offset space and an extent size encoded in the low 6 bits (radix 1 to radix 63, allowing extents up to (1 << 63) bytes). Since the low 6 bits are used for the extent, the minimum extent size would be 64 bytes. The negative key space could be used for auxiliary records associated with the file or directory. HAMMER2 uses this very method to encode its radix trees, allowing each recursion to use a variable-sized extent and to represent any 64-bit sub-range within the hash space (but H2 runs on top of a normal block device; it doesn't extend the encoding down to the device).
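
Here's a minimal C sketch of that encoding as I read it (not HAMMER2's actual on-disk format; the helper names are mine):

    #include <assert.h>
    #include <stdint.h>

    /* 64-bit key: upper bits are a 64-byte-aligned byte offset,
     * low 6 bits are the extent radix r, extent size = 1 << r.
     * Because offsets are 64-byte aligned, extents below 64 bytes
     * (radix < 6) aren't usefully addressable. */
    static inline uint64_t key_encode(uint64_t offset, unsigned radix)
    {
        assert((offset & 63) == 0);         /* 64-byte aligned */
        assert(radix >= 1 && radix <= 63);
        return offset | radix;
    }

    static inline uint64_t key_offset(uint64_t key)
    {
        return key & ~(uint64_t)63;         /* strip the radix bits */
    }

    static inline uint64_t key_extent_bytes(uint64_t key)
    {
        return (uint64_t)1 << (key & 63);
    }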

A set of directory entries could be encoded as follows, where [object_key] is the inode number of the directory.

object_key, filename_hash_key

Though doing so would almost certainly not be optimal since directory entries are very small.

Inode numbers wind up just being object keys.

This is readily doable... actually, this sort of methodology has been used many times before. I did a turnkey system 20 years ago that used this method to create a simple-stupid filesystem for a NOR flash device.

The problem with this methodology is that, if done at the kernel/filesystem level, it requires the underlying storage to directly implement the key-store, as well as to support the key width required by the filesystem... which seriously restricts what the filesystem can be built on top of.

-Matt
