Comment Re:The Mac OS X version is unsigned. (Score 3, Informative) 117

That is surprising, I agree.

An alternate suggestion if you want to keep the existing System Preferences: right-click ('ctrl-click') on the binary to bring up the context menu, then select 'Open'. This will invoke the same warning, but will also allow you to authenticate -- allowing this binary to run (now and thereafter) without complaining.

Comment Re:Xorg on *BSD (Score 1) 124

I have an Asus EEE PC (900A) with NetBSD 5 that runs the stock X.org and uses the kernel Intel DRI driver (i915drm) for accelerated 3D performance -- pretty good given the hardware. There are also DRI drivers for Radeon, which I've used; I haven't looked into Nouveau. So the 3D support foundation is there, but the hardware pickings are still kinda slim.

Besides basic 3D acceleration, the continual 'catchup game' with desktop BSD is the explicit coding for Linux on the part of the big open source desktop environments, example: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Flive.gnome.org%2FPortabilityMatrix

Comment Re:Minix (Score 1) 120

Thanks for a very thoughtful and informative post!

Where I started in this thread was posting a 1.2 MB kernel RAM footprint in vanilla NetBSD/ARM. This is with UFS/ext2/msdos filesystems, tcp/ip networking, NIC, USB standard devices (bulk storage, audio, etc.) loaded. It doesn't sound terribly far off at all. Outside of the XIP and nommu advantages, which are very significant, I'm actually curious whether it would boot in 2 MB with a minimal userland. The SoC hardware has 32 MB, so I've never bothered editing the memory map. I'll trim it further as your reference did and see what it'll do...

On the Linux side, I'm really impressed at how nommu and many other patches have integrated into mainline. I always expected the bulk of uClinux to be forever a separate patchset. It's great to see.

Comment Re:Apple is killing text messaging (Score 1) 355

SMS messages aren't particularly cheap. Also note that the message will travel over 802.11 wireless when available. It's cost effective, since I can sign up for a fairly minimal SMS plan and know that most of my messages are being carried by my devices' Internet access. It also means that I can receive messages even when I don't have cellular service but do have wireless Internet access.

Comment Re:Minix (Score 1) 120

Thanks, appreciate the link. But it sorta makes my point:
- An allusion to a vapor product with a 3 MB RAM goal is far from showing a dmesg. :)
- The linked "TLK" project reads nicely but has more aspirations than code, AFAICT
- The included 'web browser' was a misstatement, clarified in a subsequent post.

I'm aware of (uc)Linux's lovely support for MMU-less systems. It was a considerable kernel fork; what I'm impressed with is how much of it has been integrated back into mainline. It's a pity that someone long ago didn't do for NetBSD what the uClinux project did for Linux 2.6.x.

But the context here is Minix3 on vanilla x86, not microcontrollers. So as I said before, I'm looking for configurations of x86 or ARM running a modern Linux kernel (2.6.x or 3.x) w/ 2 MB RAM or 4 MB with busybox. I sound dubious but am genuinely interested if this is possible (and how far you have to go down the rabbit hole to get there).

Comment Re:Minix (Score 2) 120

Linux can be configured to run in 2MB of RAM and 2MB of flash or less. It can run in 4MB RAM with a full network stack, busybox, and several hundred K remaining for apps.

There is no other full featured free unix like kernel which can do that. Certainly none of the free BSDs.

I'll take the bait. Care to show a reference to running a modern Linux kernel w/ 2 MB RAM, or 4 MB RAM with busybox, on i386 or ARM? Busybox can do wonders for storage requirements (e.g. for NAND FLASH), but it doesn't help with RAM at all! I found 8 MB to be difficult enough (!) the last time I tried uClibc and busybox on i386.

Just as a point of discussion, generic NetBSD is smaller than generic Linux (e.g. Debian) on the ARM platforms I've been using. A line from top shows the latest (NetBSD 5.1.2) kernel RAM footprint on ARM9: about 1.2 MB, with numerous filesystems, NFS, networking, USB support, etc. built in. This includes all kernel modules.

      PID USERNAME PRI NICE  SIZE    RES STATE      TIME  WCPU   CPU COMMAND
        0 root     125    0    0K  1240K schedule   0:00 0.00% 0.00% [system]

Running 1 instance of httpd, sshd, and two user processes, minus 1.5 MB of buffers/cache, it's using 5.5 MB of RAM right now. Tightened up, it might boot in 4 MB but I haven't tried. Do note that this is with the standard (Net)BSD libc, all completely generic, 'out-of-the-box'.

Amazing work has been done to cram Linux into small places, but I think it's somewhat disingenuous to say that no BSD is close. I'd say NetBSD provides a rather interesting starting point.

Comment Re:Will Try it (Score 1) 102

Sure, you can. But UUIDs are the default behavior in many distributions (fortunately, 'blkid' will tell you the mapping between /dev entry and UUID). I'm not saying it's a bad system. My point is that the parent's complaint about naming conventions in /dev is silly -- there are generally good reasons for each system's behavior. The complaint distracts the discussion from what's actually important.

Comment Re:Will Try it (Score 1) 102

As a user of Linux as well as both FreeBSD and NetBSD, I can't agree with the absolutes of your post.

BSDs are never easier than Linux. Linux is more modern, while BSDs stick to "tradition" and take pride in keeping things complicated. Just read their manuals/handbooks and you will have a pretty good idea.

An alternate wording: "Linux engages in constant superfluous redesign, while BSDs stick to conventional but consistent, stable interfaces that are well documented". The fact that up-to-date man pages and well-written manuals/guides exist says a lot by itself. Linux's myriad of ever-obsolete HOW-TOs is a poor substitute.

One good example of a painful process in Linux that is easy in (Net)BSD is developing for embedded architectures. I can be typing away on, say, a (PPC) Mac or a (Intel) Linux or a (Sparc) BSD box, and can cross-compile a NetBSD distribution for an ARM single-board-computer with one line: ./build.sh -u -m evbarm release. I could replace 'evbarm' with 'alpha' or 'sparc64' or 'i386' or even 'vax' (!) and I would magically get a system built for these very different architectures, constructed from whatever system I want, all built from the very same code! I didn't have to rely on some vendor to package the cross-compiler or (very painfully) do it myself -- it's just a part of the basic NetBSD system. They got a lot of stuff right!

While I do agree -- "Desktop BSD" is still not where it should be for the traditional BSDs, Linux has a long way to go here too. However, PC-BSD has done a pretty good job of doing the basic grunt work that is otherwise sorely lacking. "PC-BSD is to FreeBSD, what Ubuntu is to Debian".

Your third partition on Linux will be /dev/sda3, but on BSD it will be /dev/ad0s3e (note that it numbers disks from 0 but slices from 1, and there is still the letter "e" for the partition inside the slice - isn't that simple?)

This is a red herring. (1) On my NetBSD system, the first disk is /dev/wd0a. That's no more cryptic than Linux's /dev/sda1. (2) Who cares? Mac OS X shows my root file system as /dev/disk0s2, and you don't see everyone complaining that OS X isn't ready for the desktop! And sometimes the extra information can save your butt. Example: I have an old Sun workstation with three SCSI disks that has run Linux, BSD, and Solaris at different points in its life. One of the more painful moments I experienced with this box was under Linux, when I removed a nonessential disk (mounted as /data) after deleting its entry in /etc/fstab and unplugging it. I restarted the system -- and it would no longer boot -- because the /dev entries (/dev/sda, sdb, sdc, etc.) are enumerated in ad-hoc probe order, so they had reordered themselves, and the root drive was no longer where init thought it should be. In BSD and Solaris, the verbose naming corresponds to the disks' physical locations on the SCSI bus, so you can pull a nonessential disk and everything still 'just works'. That prevented a ton of headaches.

These issues have been solved in Linux with unique UUIDs, but now your entry for /dev/sda3 might instead say something like "UUID=1924d0d6-496d-4bbf-8fd1-aaaac6764bc5" in /etc/fstab. Good luck parsing that UUID to mean "3rd partition" unless your fstab file is well-commented! That makes BSD look positively friendly by comparison.
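To illustrate the point about commenting, here's a small sketch (my own, not any standard tool) that annotates UUID= entries in an fstab with the device names that a tool like blkid reports. The sample fstab and device-to-UUID mapping below are made up for illustration:

```python
# Hypothetical sketch: map "UUID=" specs in an fstab back to /dev names.
# SAMPLE_BLKID mimics the device -> UUID mapping blkid would report;
# both sample datasets are illustrative, not from a real system.

SAMPLE_BLKID = {
    "/dev/sda1": "0b2a7c11-1111-2222-3333-444455556666",
    "/dev/sda3": "1924d0d6-496d-4bbf-8fd1-aaaac6764bc5",
}

SAMPLE_FSTAB = """\
UUID=1924d0d6-496d-4bbf-8fd1-aaaac6764bc5 / ext4 defaults 0 1
/dev/sr0 /media/cdrom iso9660 noauto,ro 0 0
"""

def annotate_fstab(fstab_text, blkid_map):
    """Return fstab lines with UUID= entries annotated with their /dev name."""
    uuid_to_dev = {uuid: dev for dev, uuid in blkid_map.items()}
    out = []
    for line in fstab_text.splitlines():
        fields = line.split()
        if fields and fields[0].startswith("UUID="):
            uuid = fields[0][len("UUID="):]
            line += "  # " + uuid_to_dev.get(uuid, "unknown device")
        out.append(line)
    return "\n".join(out)

print(annotate_fstab(SAMPLE_FSTAB, SAMPLE_BLKID))
```

On a real system you'd get the mapping from blkid itself, but the idea is the same: keep the human-readable device name next to the opaque UUID.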

FreeBSD advocates spent a good portion of their time claiming that FreeBSD is faster than Linux. Maybe on servers under very heavy load. In all the tests I have made on a simple desktop, FreeBSD always felt a little bit slow, jerky, worse than Linux with a lightweight window manager such as LXDE (but not worse than Linux with KDE 4, for example).

You're confusing "faster" with "feels faster for interactive use". With all of the scheduler work in Linux over the past year or two, Linux desktop interactivity is really excellent. However, there is always a compromise between interactivity and overall throughput. The BSD schedulers tend to favor the latter, for better or worse.

NetBSD feels very light and fast, but then it doesn't do much out of the box. If you ever try Tiny Core Linux, check what it can do out of the box, that's about 18 times as much as what NetBSD can do.

This comparison makes no sense (to me) whatsoever. Stock NetBSD provides the loose equivalent of "Debian base + Xorg + gcc", and you can install whatever packages (binary or source) you want after that. Full versions of everything. Like Debian, that's not really comparable to Tiny Core Linux -- while it's a VERY cool project, it provides no compiler, very minimal command-line packages, only a fraction of Xorg, a minimal GUI, and is i386/amd64 only. Under (Net)BSD, I'm just a 'pkg_add' away from having, say, Firefox or XFCE installed -- and with few exceptions, the process is *exactly* the same on Alpha, Sparc(64), Intel/AMD64, or ARM. Same commands, different targets.

Comment Re:What ever do you mean... (Score 4, Informative) 475

While the average thermal velocity is lower than the escape velocity, the high velocity tail of the Maxwell-Boltzmann distribution is what's significant on long time scales.

It's worth emphasizing that room temperature isn't the relevant number here. As you pointed out, the equilibrium point is high up in the atmosphere, where the gas is very dilute and can heat to a thousand degrees or more (solar UV heating, plus some contribution from the solar wind). When you plug that temperature into the Maxwell-Boltzmann distribution, the fraction of atoms exceeding Earth's escape velocity is much larger. In absolute terms it's still a small number, but enough to leak the helium out of the atmosphere over many millions of years.

Ultimately, it is the high thermal velocity that causes the loss of helium.
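A quick numerical sketch of the argument (my own back-of-the-envelope, not from the parent post): the fraction of helium atoms with speed above Earth's escape velocity, from the Maxwell-Boltzmann speed distribution, at room temperature versus an assumed exospheric temperature of ~1000 K:

```python
# Fraction of He atoms exceeding escape velocity, P(v > v_esc), for a
# Maxwell-Boltzmann speed distribution: erfc(a) + (2a/sqrt(pi)) * exp(-a^2),
# where a = v_esc / v_p and v_p = sqrt(2 k T / m) is the most probable speed.
# The 1000 K exospheric temperature is an illustrative assumption.
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_HE = 6.6465e-27    # mass of a helium-4 atom, kg
V_ESC = 11.2e3       # Earth escape velocity, m/s

def escape_fraction(temp_k):
    """P(v > V_ESC) for helium at temperature temp_k (kelvin)."""
    v_p = math.sqrt(2.0 * K_B * temp_k / M_HE)   # most probable speed
    a = V_ESC / v_p
    return math.erfc(a) + (2.0 * a / math.sqrt(math.pi)) * math.exp(-a * a)

print(escape_fraction(300.0))    # vanishingly small at room temperature
print(escape_fraction(1000.0))   # dozens of orders of magnitude larger
```

The room-temperature fraction is so tiny it's effectively zero, while at ~1000 K it's still small in absolute terms but enormously larger -- which is exactly why the hot, dilute exosphere is where the leak happens.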

Comment Re:massive miscalculation (Score 1) 174

while i don't launch balloons - if that is the way you wanted to do it.. would it not make it easier and safer to secure it to a flatbed truck and drive it under the balloon then release then having a crane hold it??

The "crane" is needed to hold the payload still until the balloon ascends to pull the flight train and the gondola payload vertical. The tension in the flight train at balloon release pulls the payload horizontally, fairly hard. The flight train is typically 1000 feet long! While you could secure the payload to a truck, gondolas aren't generally designed to handle transverse loads at the load point. You really don't want them to, either; there are often (comparatively delicate) momentum-transfer units at the load point that allow accurate pointing of telescopes once at float altitude (~125,000 feet, or ~35 km). And once you build a structure to take the pressure off the gondola load point, you're generally back to a crane design again.

You can see pictures and movies of our experiment's launch last year from Fort Sumner, NM. The StratoCat site has some additional details about this flight and many others, including ours.

Catastrophic launches like this are really rare -- the CSBF team does a fantastic job. It's hard to tell exactly what happened here, though fairly high winds were a complicating factor. It's very lucky for everyone involved that no one got hurt.

Condolences to the science team, and best wishes that they can pick up the pieces and fly again...

Comment Re:Yet another Australian advertisement on Slashdo (Score 1) 231

I think you miss the point. Even a small 0.5-meter telescope in space is a SMEX-class NASA mission and costs well over 100 million bucks. If you can do some of the same science with a *comparably small* ground-based telescope, you win. By a lot.

Similarly, your 5-meter (or larger) telescope on the ground would be competing with SOFIA and Herschel and JWST for many applications. Those are all billion dollar class projects.

If you really want to compare a 10-meter telescope on the ground to a half-meter telescope in space, feel free... the costs start to get pretty similar. But the comparison in terms of scientific capability is not usually valid.

And by *usually*, I mean that there are some capabilities that can only be done in space. Ground facilities will never compete in those genres. But when you *can* do something from the ground, by all means you should do so.

BTW, these folks would be bemused by your comment that a half-meter telescope would be "uselessly small".

Comment Re:Yet another Australian advertisement on Slashdo (Score 1) 231

How about some balance?

- The cost of doing almost anything, anywhere in Antarctica is not far short of a space mission.

Nonsense. Sure, it's more expensive than putting a telescope on Kitt Peak, Mauna Kea, or Chile. But you're still orders of magnitude away from a space mission. A half-meter telescope on a "small explorer" (SMEX) NASA mission is over 105 million dollars, and that doesn't include the launch costs. Getting that 250 kg into space costs on the order of $20,000 USD per kg, which is a fairly conservative estimate.

Based on the overland traverses that the Italians and French undertake to Dome C each year, getting to a site like Ridge A would be more like $10/kg (naturally assuming that you're making good use of the traverse and taking lots of stuff up there in one go).

So the costs aren't even in the same ball park.

- It's "daytime" for at least half the year.

And infrared and submillimeter astronomers can observe during the day. Incidentally, most of the big outstanding questions about the assembly of galaxies and star formation will be solved at these wavelengths -- which is where the Antarctic atmosphere is most advantageous.

- You can see barely half the sky - probably less.

You get the Southern sky only, true. But most of the Milky Way is in the South, and you can observe it without interruption -- 24/7. Time-domain astronomy is something we've only scratched the surface of, and there are major new projects devoted to it, such as LSST. Antarctica could play a significant role here.

All things considered, Hawaii and Chile are far superior in most respects which matter.

As long as you ignore the poorer image quality, unstable atmosphere with large diurnal variations, comparatively soggy atmospheric water content, 100x higher infrared background -- yeah, Chile and Hawaii are far better. :)
