HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

AkiraYB

HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by AkiraYB »

Hello dear members of the Linux Mint Community, :)

Recently I decided to install LMDE on my machine. I knew that its Fake RAID controller was going to cause me some trouble, so I was kind of prepared. lol

In my first attempt I tried to use the installer right away and, as I was expecting, it didn't work! :) The live-installer only detected the two HDs but not the RAID. That's ok, so I went on and set up the RAID by hand... All went well: installing dmraid, loading the modules and finally activating the RAID through dmraid. Then I ran the live-installer again, hoping it would detect the mapper so I could choose it. To my surprise: no, it wouldn't detect it. :( So I went to the official IRC channel #linuxmint-debian@irc.spotchat.org where Ikey told me that the live-installer wasn't capable of working with RAIDs yet. The solution: a manual installation! Ikey was very kind to help me through the whole process! Thank you! I can anticipate that it was a SUCCESS, so he asked me to post my experience in the forums, and here is a quote from him:
Due to the help of AkiraYB on IRC I am now investigating the addition of RAID support to LMDE's live-installer, as we have now deemed it within the realm of our control and easy enough to implement. I will do this within the shortest time-frame possible for the hopeful inclusion in the next live-installer. Many thanks to AkiraYB for going through the process on IRC and making this possible.
Very nice! =D

I used Linux Mint Debian (201012/201101) for this tutorial. As you can see, the procedure can easily be adapted to any other system, with or without a RAID controller, but here we're addressing systems with a Fake RAID.

So, here we go to the real deal! Enjoy!

First Things First

As we're going to work in the console, typing characters like '/' (slash) and setting passwords, it's important to have a keyboard that works properly. I mean, with the right layout set for it... So, go ahead and configure yours in the Control Panel! :D

From now on, the terminal will be our best friend! So it's important that you have some familiarity with it. Also, although I have done this procedure at least twice by now, I'm writing from logs and memory and trying to make some improvements to it, so it may contain errors.

We'll do most of our work as root, so it's important to note that when a shell command comes after a '#' it is executed as root, and when it comes after a '$' it is executed as a normal user.

I recommend opening a terminal and issuing a:

Code: Select all

$ sudo su
to become root. We'll be using it a lot, but as always: BE CAREFUL!

Activate your Fake RAID

First we have to install dmraid. Simple enough:

Code: Select all

# apt-get update
# apt-get install dmraid
Next, load the correct modules for your RAID. In my case I'm using RAID 1 (mirror), so I have to load both dm_mod AND dm_mirror. I guess RAID 0 just needs dm_mod. Other types of RAID may have other special modules that have to be loaded. As an example:

Code: Select all

# modprobe dm_mod
# modprobe dm_mirror
And then, activate it:

Code: Select all

# dmraid -ay
RAID set "pdc_diif" was activated
RAID set "pdc_diif1" was activated
RAID set "pdc_diif2" was activated
RAID set "pdc_diif3" was activated
RAID set "pdc_diif4" was activated
The output can vary, depending on your RAID controller and the number of partitions on it.
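
If you want to double-check what dmraid actually found, these read-only commands are (as far as I know) safe to run at any point; the set name pdc_diif below is just my example, yours will differ:

Code: Select all

# dmraid -r
# dmraid -s
# ls -l /dev/mapper/
dmraid -r lists the disks that carry RAID metadata, dmraid -s shows the status of the detected sets, and the ls shows the device-mapper nodes created by the activation.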

Now we're ready to go!

Create Partitions

Time for some partitioning... I recommend using GParted for the task. Just make sure to select the mapper for your RAID (/dev/mapper/pdc_diif in my case) if it isn't already selected. Review your modifications before committing! Hopefully GParted will also create the filesystems on (format) the partitions you create. You did your backup, right?! :)

Here's my configuration, just for reference:

Code: Select all

/dev/mapper/pdc_diif1 100MB ext2 /boot
/dev/mapper/pdc_diif2  30GB ext4 /
/dev/mapper/pdc_diif3  80GB ext4 /home
/dev/mapper/pdc_diif4   2GB swap swap
You don't need to tell GParted the mount points (/boot, /, /home, swap) for each partition; I only list them here for reference.
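
As a quick sanity check (assuming your RAID mapper is named pdc_diif like mine; adjust the name to your own set), you can confirm that the partition mappings and filesystems really exist before moving on:

Code: Select all

# ls -l /dev/mapper/pdc_diif*
# blkid /dev/mapper/pdc_diif*
blkid should report the filesystem TYPE (ext2/ext4/swap) you just created for each partition.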

Installation

Now we have to create the folders and then mount the partitions on the right location (remember that I'm using my configuration as an example, you have to adapt it to yours):

Code: Select all

# mkdir /mnt/target
# mount /dev/mapper/pdc_diif2 /mnt/target
# mkdir /mnt/target/boot
# mount /dev/mapper/pdc_diif1 /mnt/target/boot
# mkdir /mnt/target/home
# mount /dev/mapper/pdc_diif3 /mnt/target/home
Mount the base filesystem that we have to copy to our new system:

Code: Select all

# mkdir /mnt/source
# mount -o loop -t squashfs /live/image/casper/filesystem.squashfs /mnt/source
And then, copy it to our system (remember, you MUST put the / at the end of the source folder, or rsync will copy the folder itself instead of its contents):

Code: Select all

# rsync -avz /mnt/source/ /mnt/target/
This procedure can take some time to finish... After all, we're copying the whole base system (the installation).
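
Once rsync finishes, a rough way to see that the data really landed on the RAID partitions (just a sanity check, not a full verification) is:

Code: Select all

# df -h /mnt/target /mnt/target/boot /mnt/target/home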

Change to our New System (chroot)

"Enter" our new system so we can make the initial configuration:

Code: Select all

# umount /mnt/source
# mount --bind /dev      /mnt/target/dev
# mount --bind /dev/pts  /mnt/target/dev/pts
# mount --bind /dev/shm  /mnt/target/dev/shm
# mount --bind /proc     /mnt/target/proc
# mount --bind /sys      /mnt/target/sys
# mount --bind /tmp      /mnt/target/tmp
# cp -f /etc/resolv.conf /mnt/target/etc/resolv.conf
# chroot /mnt/target
Now that we're inside the new system's root, let's configure the timezone, locales and keyboard for it:

Code: Select all

# dpkg-reconfigure tzdata
# dpkg-reconfigure locales
# dpkg-reconfigure keyboard-configuration
Each command will ask you for some information; just follow the prompts.

Clean Up the System

The base filesystem comes with a 'mint' user that is used by the live environment. We don't need it in our installation:

Code: Select all

# userdel -rf mint
The -rf switches tell the command to delete everything related to that user. Because of this, the command will issue a warning that can safely be ignored.

We need to wipe out some lines from the GDM config file that are specific to the live environment. There are several ways of doing it; I suggest using the nano editor for the task so you can see what's going on. But if you're feeling lucky, you can simply use the sed command to remove what we don't need.

The SED way

Code: Select all

# sed -e '/^[^#\[]*$/d' -i /etc/gdm3/daemon.conf
The NANO way

Code: Select all

# nano /etc/gdm3/daemon.conf
And REMOVE these lines:

Code: Select all

TimedLoginEnable=false
AutomaticLoginEnable=true
TimedLogin=mint
AutomaticLogin=mint
TimedLoginDelay=30
CTRL+O to save, CTRL+X to exit.

With these lines in place, GDM would try to log in automatically with the mint user that we no longer have, and it would break... :)
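
Whichever way you choose, you can quickly confirm that nothing related to the automatic/timed login was left behind:

Code: Select all

# grep -i login /etc/gdm3/daemon.conf
If this prints nothing (or only commented lines starting with '#'), you're good.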

Now we have to remove every package related to the live environment:

Code: Select all

# apt-get remove --purge live-initramfs live-config live-config-sysvinit live-installer live-installer-slideshow
It can take a while to remove them, as it will (hopefully) regenerate the initramfs.

Manage Users

Next, we set the root password:

Code: Select all

# passwd
Remember that nothing is printed to the screen while typing the password.

We need a user, right?! :)

Code: Select all

# adduser <username>
Where <username> is a login name of your choice.

Add this user to the sudo group so we can sudo:

Code: Select all

# usermod -a -G sudo <username>
You can add yourself to other groups as needed when we're done.
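
To confirm the user really ended up in the sudo group (using the same <username> placeholder), this should list sudo among the groups:

Code: Select all

# groups <username>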

File Systems Table (/etc/fstab)

It's time to tell the system how it's organized by editing the file /etc/fstab:

Code: Select all

# nano /etc/fstab
Again, I'm attaching my configuration for reference (edit yours appropriately):

Code: Select all

# /etc/fstab: static file system information.
#
# Use 'vol_id --uuid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0

/dev/mapper/pdc_diif1   /boot   ext2    defaults        0       2
/dev/mapper/pdc_diif2   /       ext4    defaults        0       1
/dev/mapper/pdc_diif3   /home   ext4    defaults        0       2
/dev/mapper/pdc_diif4   swap    swap    defaults        0       0

/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto 0       0
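
The /dev/mapper names worked fine for me, but if you prefer the UUID= style mentioned in the header comment of the file, you can look up the UUIDs with blkid (shown here for my partitions; adapt to yours) and put them in the first column instead:

Code: Select all

# blkid /dev/mapper/pdc_diif1 /dev/mapper/pdc_diif2 /dev/mapper/pdc_diif3 /dev/mapper/pdc_diif4
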
Set the Hostname

Code: Select all

# echo <hostname> > /etc/hostname
# sed -e 's/mint/<hostname>/' -i /etc/hosts
Of course, substitute <hostname> with the name you want to use for your computer.

Don't Forget dmraid, the Culprit

Code: Select all

# apt-get update
# apt-get install dmraid
It'll build a new initramfs and can take some time.
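
If you want to be extra sure the new initramfs knows about device-mapper, you can rebuild it explicitly and, if your initramfs-tools ships the lsinitramfs helper, peek inside it (the kernel version in the file name will differ on your system):

Code: Select all

# update-initramfs -u
# lsinitramfs /boot/initrd.img-* | grep -i dm
You should see dm-related modules (and, hopefully, the dmraid bits) listed.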

Finally, GRUB

Make sure you select /dev/dm-0 (/dev/mapper/pdc_diif in my case) to install it to the MBR!

Code: Select all

# dpkg-reconfigure grub-pc
I think you'll have to mess with the GRUB config files if you want to dual boot with other OSs (I don't know if it can autodetect them). But that's another topic. :)
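
If dpkg-reconfigure doesn't offer you the mapper device in its list, a possible fallback (I didn't need it myself, so treat this only as a hint) is to point grub-install at the mapper directly and then regenerate the config:

Code: Select all

# grub-install /dev/mapper/pdc_diif
# update-grub
update-grub rebuilds /boot/grub/grub.cfg and, with os-prober installed, should pick up other operating systems for dual boot.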

Exit chroot, Umount, Reboot!

Code: Select all

# exit
# umount /mnt/target/tmp
# umount /mnt/target/sys
# umount /mnt/target/proc
# umount /mnt/target/dev/shm
# umount /mnt/target/dev/pts
# umount /mnt/target/dev
# umount /mnt/target/home
# umount /mnt/target/boot
# umount /mnt/target
It never hurts to say again that this is just an example. Remember to unmount any partitions that you have mounted, just to be safe. Finally, REBOOT! YAY! :D

THAT'S IT!

Congratulations! You now have LMDE installed on your Fake RAID! Enjoy!!! :)

Given the length of this post, these steps may look very difficult to carry out, but I've just gone into a bit of extra detail to help you understand what is going on in each step (though not too much, I hope :)). It isn't that complicated, really. It's just a lot to do... =D

Again, many thanks to Ikey for helping me through the whole process and making this guide possible!

Thanks for your attention! Cya! o/
AkiraYB.
pmd

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by pmd »

I'm just installing LMDE using this tutorial and I found something that should be corrected. When you are mounting fresh partitions you don't have the boot and home directories on them yet, so the first code block in the Installation section should look like this:

Code: Select all

# mkdir /mnt/target
# mount /dev/mapper/pdc_diif2 /mnt/target
# mkdir /mnt/target/boot
# mount /dev/mapper/pdc_diif1 /mnt/target/boot
# mkdir /mnt/target/home
# mount /dev/mapper/pdc_diif3 /mnt/target/home
AkiraYB

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by AkiraYB »

pmd wrote:I'm just installing LMDE using this tutorial and I found something that should be corrected. When you are mounting fresh partitions you don't have the boot and home directories on them yet, so the first code block in the Installation section should look like this:

Code: Select all

# mkdir /mnt/target
# mount /dev/mapper/pdc_diif2 /mnt/target
# mkdir /mnt/target/boot
# mount /dev/mapper/pdc_diif1 /mnt/target/boot
# mkdir /mnt/target/home
# mount /dev/mapper/pdc_diif3 /mnt/target/home
You're totally right! I tried to make it prettier and ended up breaking it! I've corrected the original post! :)

Thank you!
donchurch

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by donchurch »

I'm stoked! BIOS RAID support has been my biggest problem when trying out different distributions, and it's why I stayed with Debian for several years. Another major issue had been getting good video support for my killer 22" CRT with my ATI graphics cards, so I ended up trying Fedora back at F-13 and was impressed with the excellent RAID and open source video support for my system, and even had suspend working without a hitch. I do have to admit that I was starting to find Lenny's older versions of apps holding me back, which is part of the reason I went to F-13.

Anyway, this probably isn't the best place to ask, but ever since I did an "apt-get safe-upgrade" (roughly 500 upgraded packages) my system has had constant hard drive access and 90-95 percent CPU usage. It's been going on for hours and several restarts now, and what's weird is that when I look at the usage details in the system monitor app, there is no one application pulling a lot of CPU power. Gnome-system-monitor itself jumps to the top of the usage list every few seconds at about 20%. The second similar-usage app is gvfs-gdu-volume-monitor, and the rest are the normal things at 2-8% (xorg, udevd etc). I have a Core Duo and both cores average 95% constantly, but interestingly, when I ran an updatedb command (which initially took 10+ seconds), the CPU usage actually dropped to about 50% on both CPUs. And like I said before, the hard drive usage light is constantly lit.

One last thing... seemingly every search result I read on the GRUB2 and BIOS RAID combination says "no go". LMDE has apparently nailed the solution there, nice job!

update:
Hmm... Booted up LMDE today and all seems to be well. It went on for so long I thought it was stuck in a loop of some sort, but CPU and disk access have dropped back to normal. Thanks anyway.
Again...Excellent Howto!
jabberwocky_

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by jabberwocky_ »

Excellent post! Both useful and educational.

Now I can have my favorite distro running on my fakeraid setup, and I learned how Linux is installed.
brf

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by brf »

I have successfully installed Mint-Debian 2011-09 on a RAID-1 system using the process described in this how-to. It went quite smoothly, except for the final step of getting GRUB installed. That took several tries but was ultimately successful. Altogether it was an educational experience.

But I think I may have found a simpler way. Here is a broad outline of the technique; the details of the steps are the same as in the manual install how-to.

1) Scrounge up a spare hard drive. I used a SATA drive from a notebook computer. It doesn't have to be fast or large or new. It doesn't even have to be "installed" in your computer; you can leave it hanging out the side of the box by the wires.

2) Boot from the live CD or a USB stick. The temporary drive should be visible. If you have two main disks in a RAID-1 configuration, for example, they will show up as /dev/sda and /dev/sdb, and the new disk will be /dev/sdc.

3) Install the system to /dev/sdc. I am assuming you have the partitioning of that drive set up appropriately. Let the installer do its magic with regard to the live user, unneeded packages, etc. This should be more robust than trying to track the many likely changes in the install process from one release to the next to keep the manual install how-to up to date.

4) Install dmraid in BOTH the live and installed systems.

5) Use the live system to copy the installed system to your RAID set, into the partitions and file systems you have prepared there (see the rough sketch after this list). You can now unmount and remove the temporary disk.

6) Jigger around with /etc/fstab on the RAID set to reflect the organization of the target system.

7) Install GRUB and reboot.
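
Just as a rough sketch of step 5, assuming the temporary drive's root partition is /dev/sdc2 and your RAID root partition is /dev/mapper/pdc_xxxx2 (both names are placeholders, adjust them to your layout), the copy is basically the same rsync trick as in the how-to:

Code: Select all

# mkdir /mnt/newsys /mnt/raid
# mount /dev/sdc2 /mnt/newsys
# mount /dev/mapper/pdc_xxxx2 /mnt/raid
# rsync -avz /mnt/newsys/ /mnt/raid/
Repeat the mount/rsync for /boot and /home if you split them onto separate partitions, then fix /etc/fstab as in step 6.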

It would be nice if the installer could find the RAID sets and make this how-to unnecessary. I would think the users who have RAID disks are also the more adventurous ones who would choose Mint-Debian over the regular releases. From my experience with the manual install, there isn't much difference between the RAID and non-RAID cases, once dmraid is installed. Perhaps this can be put on the wish-list for the next Mint-Debian .iso update...
lesebas

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by lesebas »

Hello,

Just to let you know that this HOWTO also works for software RAID: just replace dmraid with mdadm and enjoy... Many thanks for this very useful post! :wink:
bincue700us

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by bincue700us »

AkiraYB wrote:It'll build a new initramfs and can take some time.

Finally, GRUB

Make sure you select /dev/dm-0 (/dev/mapper/pdc_diif in my case) to install it to the MBR!
It builds, however it hangs at:
ldconfig: /lib/libuuid.so.1 is not a symbolic link

Nothing but patience cured it.

For some reason the third step of configuring grub-pc is missing... It does not ask me where to install GRUB; I'm unable to even select anything because it goes back to the CLI after "quietam".

Not sure what to do at this moment. So far everything looks good; I edited the /mnt/target/grub/device.map setting, changing hd4 to (hd0) /dev/mapper/pdc_jfdffbed, like I did in Red Hat, but there I had a place to choose where to install GRUB... not here, not this time.

Where has GRUB been installed? I have no idea.
GRUB2 (grub-pc) is very new to me, in fact... as of today. So where it is installed, and how to edit whatever ".cfg" file it uses, I have yet to dig into further.

Code: Select all

mint / # grub install --force /dev/mapper/pdc_jfdffbed
The program 'grub' is currently not installed.  You can install it by typing:
apt-get install grub-legacy
grub: command not found
The point of having GRUB2 (grub-pc) over GRUB (grub-legacy) is still not clear to me; most everything seemed to be fine with the old GRUB in the past, at least I didn't have this extra hurdle. Trying to figure out where it is installed, because this is all I get:

Code: Select all

# file -s /dev/mapper/pdc_jfdffbed
/dev/mapper/pdc_jfdffbed: x86 boot sector; partition 1: ID=0x83, starthead 32, startsector 2048, 393216 sectors; partition 2: ID=0x83, starthead 154, startsector 395264, 61440000 sectors; partition 3: ID=0x5, starthead 254, startsector 61835264, 1891448832 sectors, code offset 0xb8
and dpkg-reconfigure grub-pc sends me into another loop without the "step 3" option for choosing where to install GRUB.

What next?

Code: Select all

# apt-get install grub2 os-prober
Reading package lists... Done
Building dependency tree       
Reading state information... Done
os-prober is already the newest version.
The following extra packages will be installed:
  grub-common grub-pc grub-pc-bin grub2-common
Suggested packages:
  multiboot-doc grub-emu xorriso desktop-base
The following NEW packages will be installed:
  grub2
The following packages will be upgraded:
  grub-common grub-pc grub-pc-bin grub2-common
4 upgraded, 1 newly installed, 0 to remove and 427 not upgraded.
Need to get 3,456 kB of archives.
After this operation, 49.2 kB of additional disk space will be used.
Do you want to continue [Y/n]? y
Get:1 http://debian.linuxmint.com/latest/ testing/main grub-pc i386 1.99-11 [92.1 kB]
Get:2 http://debian.linuxmint.com/latest/ testing/main grub-pc-bin i386 1.99-11 [796 kB]
Get:3 http://debian.linuxmint.com/latest/ testing/main grub2-common i386 1.99-11 [94.0 kB]
Get:4 http://debian.linuxmint.com/latest/ testing/main grub-common i386 1.99-11 [2,472 kB]
Get:5 http://debian.linuxmint.com/latest/ testing/main grub2 i386 1.99-11 [2,472 B]
Fetched 3,456 kB in 6s (495 kB/s)                                              
Reading changelogs... Done
Preconfiguring packages ...
(Reading database ... 131272 files and directories currently installed.)
Preparing to replace grub-pc 1.99-8 (using .../grub-pc_1.99-11_i386.deb) ...
Unpacking replacement grub-pc ...
Preparing to replace grub-pc-bin 1.99-8 (using .../grub-pc-bin_1.99-11_i386.deb) ...
Unpacking replacement grub-pc-bin ...
Preparing to replace grub2-common 1.99-8 (using .../grub2-common_1.99-11_i386.deb) ...
Unpacking replacement grub2-common ...
Preparing to replace grub-common 1.99-8 (using .../grub-common_1.99-11_i386.deb) ...
Unpacking replacement grub-common ...
Selecting previously deselected package grub2.
Unpacking grub2 (from .../grub2_1.99-11_i386.deb) ...
Processing triggers for man-db ...
Processing triggers for install-info ...
Setting up grub-common (1.99-11) ...
Installing new version of config file /etc/grub.d/20_linux_xen ...
Setting up grub2-common (1.99-11) ...
Setting up grub-pc-bin (1.99-11) ...
Setting up grub-pc (1.99-11) ...
Setting up grub2 (1.99-11) ...
Famine

Re: HOWTO: Install LMDE on a system with a Fake RAID (dmraid)

Post by Famine »

I've used this guide successfully in the past, but it seems to be falling down on the latest builds. Anyone more knowledgeable than me done a dmraid install recently?
SkipG

Updated for LMDE 201303 (UP6)

Post by SkipG »

Famine wrote:I've used this guide successfully in the past, but it seems to be falling down on the latest builds. Anyone more knowledgeable than me done a dmraid install recently?
Well, I just used this note to install LMDE Cinnamon x86_64 201303 in a VMware Fusion machine (without dmraid), and initially it didn't work for me either. The networking has changed since the article was written, since Debian now supports both IPv4 and IPv6. I don't have a step-by-step howto, but the content of the /etc/hosts file has changed significantly, and the location of the /etc/resolv.conf file has changed.

I think what I did was this:

The rsync step goes much, much faster if you add the "q" switch, but works fine as written.

After the rsync step, but before the chroot step, edit the /etc/hosts file in the live machine. Replace the word "mint" in the line "127.0.0.1 localhost mint" with the name of your new machine, and save the change to /mnt/target/etc/hosts.

The new final resting place of the resolv.conf file is /run/resolvconf/resolv.conf, so change the "cp -f /etc/resolv.conf ..." step to "cp /run/resolvconf/resolv.conf ..."
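
Putting those two tweaks together, the pre-chroot part would then look roughly like this (my reconstruction, with <hostname> standing for the name of your new machine):

Code: Select all

# sed -e 's/mint/<hostname>/' -i /mnt/target/etc/hosts
# cp /run/resolvconf/resolv.conf /mnt/target/etc/resolv.conf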

You can, if necessary, reboot off the iso, type "e" before the boot takes place, hit <tab> to edit the boot command, and change "splash" to "text". Then you can mount your filesystems and make any additional necessary changes.

--skip