Thursday, 9 October 2014

Memory Management Unit (MMU)

Memory Management

This is quite possibly the most important part of any kernel, and rightfully so: all programs and data require it. As you know, because the kernel is still in Supervisor Mode (Ring 0), we have direct access to every byte in memory. This is very powerful, but it also produces problems, especially in a multitasking environment, where multiple programs and data sets require memory.
One of the primary problems we have to solve is: What do we do when we run out of memory?
Another problem is fragmentation. It is not always possible to load a file or program into a sequential area of memory. For example, let's say we have two programs loaded, one at 0x0 and the other at 0x900. Both of these programs requested to load files, so we load the data files:





Notice what is happening here: there is a lot of unused memory between all of these programs and files. What happens if we need to load a bigger file that cannot fit in any of the remaining gaps? This is where big problems arise with the current scheme. We cannot rearrange memory in any specific way, as doing so would corrupt the currently executing programs and loaded files.
Then there is the problem of where each program is loaded. Each program would need to be position independent or provide relocation tables. Without one of these, we cannot know what base address the program is supposed to be loaded at.
Let's look at these problems more deeply. Remember the ORG directive? This directive sets the location from which your program is expected to be loaded. By loading the program at a different location, the program will reference incorrect addresses and crash. We can easily test this: right now, Stage2 expects to be loaded at 0x500. If we load it at 0x400 from Stage1 (while keeping the ORG 0x500 within Stage2), a triple fault will occur.
This adds two new problems. How do we know where to load a program? Considering that all we have is a binary image, we cannot know. However, if we make it standard that all programs begin at the same address (say, 0x0), then we can know. This would work, but it is impossible to implement if we plan to support multitasking. If, instead, we give each program its own memory space that virtually begins at 0x0, it will work. After all, from each program's perspective, they are all loaded at the same base address, even if they are at different locations in real (physical) memory.
What we need is some way to abstract the physical memory. Let's look closer.




Virtual Address Space (VAS)

Virtual Address Space is a program's address space. Note that this does not necessarily correspond to physical system memory. The idea is that each program has its own independent address space. This ensures one program cannot access another program, because they are using different address spaces.
Because a VAS is virtual and not tied directly to physical memory, it allows the use of other sources, such as disk drives, as if they were memory. That is, it allows us to use more memory than is physically installed in the system.
This fixes the "Not enough memory" problem.
Also, as each program uses its own VAS, we can have every program begin at base 0x0000:0000. This solves the relocation problems discussed earlier, as well as memory fragmentation, since we no longer need to allocate contiguous physical blocks of memory for each program.
Virtual addresses are mapped by the kernel through the MMU. More on this a little later.

Virtual Memory: Abstract

Virtual memory is a special memory addressing scheme implemented by both the hardware and the software. It allows non-contiguous memory to act as if it were contiguous.
Virtual memory is based on the Virtual Address Space concept. It provides every program its own virtual address space, enabling memory protection and decreasing memory fragmentation.
Virtual memory also provides a way to indirectly use more memory than we actually have in the system. One common way of approaching this is by using page files stored on a hard drive.
Virtual memory needs to be mapped through a hardware device in order to work, as the translation is handled at the hardware level. This is normally done by the MMU, which we will look at later.
To see virtual memory in action, consider how the mapping works. Each memory block within the virtual address space is linear. Each block is mapped either to its location within real physical RAM or to another device, such as a hard disk. The blocks are swapped between these devices on an as-needed basis. This might seem slow, but it is very fast thanks to the MMU.
Remember: each program will have its own virtual address space. Because each address space is linear and begins at 0x0000:0000, this immediately fixes a lot of the problems relating to memory fragmentation and program relocation.
Also, because virtual memory can use different devices for its memory blocks, it can easily manage more than the amount of memory within the system. That is, if there is no more system memory, we can allocate blocks on the hard drive instead. If that space runs out as well, we can either increase the page file as needed or display a warning or error message.
Each memory "block" is known as a page, which is usually 4096 bytes in size.
Once again, we will cover everything in much detail later.

Memory Management Unit (MMU): Abstract


The MMU, also known as the Paged Memory Management Unit (PMMU), is a component inside the microprocessor responsible for managing the memory requested by the CPU. It has a number of responsibilities, including translating virtual addresses to physical addresses, memory protection, cache control, and more.

Segmentation: Abstract

Segmentation is a method of memory protection. In segmentation, we allocate only a certain address space to the currently running program. This is done through the hardware segment registers.
Segmentation is one of the most widely used memory protection schemes. On the x86, it is usually handled by the segment registers: CS, SS, DS, and ES.
We have seen the use of this in Real Mode.

Paging: Abstract

THIS will be important to us. Paging is the process of managing program access to virtual memory pages that are not in RAM. We will cover this in much more detail later.

Linux basics: kernel and bootloader

What is a bootloader

A bootloader is a piece of code that runs before any operating system is running.
Bootloaders are used to boot operating systems; usually each operating system has a set of bootloaders specific to it.
Bootloaders usually provide several ways to boot the OS kernel and also contain commands for debugging and/or modifying the kernel environment.
In this talk we will concentrate on Linux bootloaders.

Since it is usually the first software to run after powerup or reset, it is highly processor and board specific.

Processor and Board specific


  • The bootloader starts before any other software, and is therefore highly processor and board specific.
  • The bootloader performs the necessary initializations to prepare the system for the operating system.
  • The operating system is more general and usually contains minimal board- or processor-specific code.

Usually starts from ROM (Flash)

When a CPU starts, it has certain preset values in its registers; it usually knows nothing about on-board memory. It expects to find program code at a specific address, and this address usually points to ROM or flash: this is the beginning of the bootloader code. The first task of the bootloader is usually to map the RAM to predefined addresses. After RAM is mapped, the stack pointer is set up.
This is the minimal setup required; after that, the bootloader starts its work.


Moves itself to RAM for actual work


We now have RAM and a stack pointer and are ready to do the real work. First of all, since flash memory is usually a scarce resource and much slower than RAM, we move the bootloader code to RAM for execution. In many cases flash memory is located in an address space that is not executable, for example serial flash that you access by reading repeatedly from one address. Also, in many cases the bootloader code is compressed, so it must first be uncompressed and then written to RAM.


Minimum peripheral initialization

The bootloader needs to initialize the peripherals needed for its operation; usually we need the screen, the keyboard, and optionally the mouse. In embedded systems we need a terminal. Sometimes, when we boot from the network, we need a network connection, so we have to initialize the Ethernet card.

These initializations are usually just for the bootloader.
The operating system usually overrides them or does not use them at all.


Decide which OS image to start


Bootloaders can usually load one of several kernel images known to the bootloader. This is done in embedded systems to make sure we can upgrade a kernel without fear of power loss during the upgrade, since we always have a backup kernel that will load. In PC environments this is done to let the user choose among several OSes, or to give the user a chance to try another kernel without losing the current one.


Get the kernel

Bootloaders must load the kernel image from a known location.
Most bootloaders provide several ways to load the kernel:

  • Load kernel image from memory
  • Load kernel image from file (If the system has a disk)
  • Load kernel image from network using bootp
  • Load kernel image to memory using TFTP

Allow manual intervention

In some cases we need manual intervention to specify parameters for the OS.
Bootloaders usually have a way to interrupt automatic loading and insert parameters. This is done by waiting a certain default time before autoloading; during this time the user can press a specific key (Esc in the case of LILO, or any key in the case of U-Boot). When a key is pressed, a prompt is displayed, and the user has a chance to change certain environment variables or load a different kernel from memory, disk, network, etc.


PC boot sequence

PCs have a different boot sequence: first a ROM BIOS starts and gives minimal configuration to peripherals, usually just the screen and keyboard. It then looks in its tables (CMOS RAM) for a boot device.
From the specified boot device it loads the first stage (from the MBR) into RAM, and control is transferred to that first stage.

The first-stage bootloader loads the second stage, which usually displays the boot menu. The user selects the OS to boot, or a default OS is booted after a specified time. At this stage the user has the option to add boot parameters for the kernel. In some cases the bootloader does not know how to load the OS and transfers control to another bootloader; this is called chain loading, and is usually done when LILO or GRUB needs to load Windows.


Prepare parameters for OS loading


The OS is usually generic and not specifically tied to one board implementation, so for the OS to work as expected it needs some information about the memory map, clocks, etc. These parameters can be compiled into the kernel, but this makes the kernel very board specific and makes it difficult to replace kernels in the future. Most kernels allow passing information by means of a command line and/or an in-memory structure.


Loads OS Image

After the bootloader knows which OS image (kernel) to load, it loads the image to RAM. This usually must be done at a specific address, which is either hard coded into the bootloader or read from the kernel image file.
The bootloader needs to be able to read and understand the object format of the OS kernel. It extracts the actual memory image from the object file, places it in memory according to the header of the object format, and then jumps to the entry point (also given in the object header).

In the case of Linux, which uses the MMU, the base address is usually a virtual address, and the bootloader needs to translate it to a physical address. This is why we usually can't load Linux using a bootloader written for OSes that don't use the MMU, such as VxWorks.

Load optional RAM file system (initrd)


In Linux the kernel needs a root file system to work; this root file system can be on a disk or on a RAM disk. A RAM disk (initrd) is frequently used in Linux distributions to keep the kernel small and to load only the set of modules required for the hardware. The initial RAM disk is used as a root file system; the init process on the RAM disk loads the needed modules and then mounts the other filesystems from disk. In embedded systems there is often no disk at all, so the RAM file system is used.

Transfer control to OS kernel in RAM


After all images are loaded into RAM, we can start the OS itself. Starting the OS usually means copying it to a certain location in RAM, filling a memory structure, and then transferring control to the OS code. At this stage the bootloader's role is done, and the OS can reuse the RAM area where the bootloader resided.


Starting Linux

Until now we concentrated on the bootloader itself, and what we discussed applies in general to all operating systems; now we will discuss the specific features needed to boot Linux. Linux was created as an operating system for desktops and servers; it was later adapted for embedded environments and is thus different from other embedded operating systems.

Linux must have a file system. This makes programming an embedded Linux application easier, as it is similar to programming a desktop application: you simply compile your program into a file linked with the appropriate libraries and then run that file. In other embedded operating systems you link your application with the kernel itself, so you are actually writing a kernel application. However, since Linux must have a file system and embedded systems usually do not even have a disk, we use a RAM disk image of the file system. This RAM disk image can be included in the kernel image as another section in the object file, or used as a separate file. The bootloader must know how to handle it: it must extract the file system image and put it in a known RAM location before it transfers control to the kernel.

Wednesday, 10 September 2014

BEAGLEBONE BLACK

Configuration of the GPIO pins of the BeagleBone Black.

Device tree source file.

Home automation project.

BeagleBone Black programs:

  • GSM program
  • sensor program
  • LCD program

These relate to various interfacing devices.





What is Raspberry Pi?

     

Originally meant for teaching kids how to program

  • $35 desktop computer
  • Connect to the Internet
  • Play HD videos
  • Use as an emulator for classic video games
  • Inexpensive, powerful web server
  • Experiment with electronics

IOCTL on Device Drivers

Input/Output Control (ioctl, for short) is a common operation, or system call, available in most driver categories. It covers basically anything to do with device input/output, or device-specific operations, yet is versatile enough for almost any kind of operation.


The question is: how can all this be achieved by a single function prototype? The trick lies in its two key parameters: the command and the argument. The command is a number representing an operation; the argument is the corresponding parameter for the operation. The ioctl() implementation does a switch … case over the command to implement the corresponding functionality. The following has been its prototype in the Linux kernel for quite some time:
int ioctl(struct inode *i, struct file *f, unsigned int cmd, unsigned long arg);



From user space, ioctl() is invoked via the system call prototyped in <sys/ioctl.h> as:
#include <sys/ioctl.h>
int ioctl(int d, int request, ...);  

DESCRIPTION

The ioctl function manipulates the underlying device parameters of special files. In particular, many operating characteristics of character special files (e.g. terminals) may be controlled with ioctl requests. The argument d must be an open file descriptor.
The second argument is a device-dependent request code. The third argument is an untyped pointer to memory. It's traditionally char *argp (from the days before void * was valid C), and will be so named for this discussion.
An ioctl request has encoded in it whether the argument is an in parameter or an out parameter, and the size of the argument argp in bytes. Macros and defines used in specifying an ioctl request are located in the file <sys/ioctl.h>.

RETURN VALUE

Usually, on success zero is returned. A few ioctls use the return value as an output parameter and return a nonnegative value on success. On error, -1 is returned, and errno is set appropriately.  

ERRORS

EBADF
d is not a valid descriptor.
EFAULT
argp references an inaccessible memory area.
ENOTTY
d is not associated with a character special device.
ENOTTY
The specified request does not apply to the kind of object that the descriptor d references.
EINVAL
Request or argp is not valid.


Tuesday, 26 August 2014

LINUX BASIC COMMANDS WITH EASY EXAMPLES

cd Command


The cd (change directory) command is the basic navigation tool for moving your current
location to different parts of the Linux file system. You can move directly to a directory by
typing the command, followed by a pathname or directory. For example, the following
command will move you to the /usr/bin directory:
# cd /usr/bin
When you’re in that directory, you can move up to the /usr directory with the following
command:

# cd ..

You could also move to the root directory, or /, while in the /usr/bin directory by using the
following command:

# cd ../..

Finally, you can always go back to your home directory (where your files are) by using either
of the following commands:

# cd
or
# cd ~


pwd Command

The pwd (print working directory) command tells you where you are, and prints the working
(current) directory. For example, if you execute
# cd /usr/bin
and then type

# pwd
you’ll see
/usr/bin

Although there’s a man page for the pwd command, chances are that when you use pwd, you’re
using a pwd built into your shell. How do you tell? If you try calling pwd with the following
command, you should see only the current working directory printed:

# pwd --help

Instead, try calling pwd with

# /bin/pwd --help

You’ll see a short help file for the pwd command, and not the current directory.

find Command

The find command is a powerful searching utility you can use to find files on your hard drive.
You can search your hard drive for files easily with a simple find command line. For example,
to search for the spell command under the /usr directory, you would use

# find /usr -name spell -print

You can also use the find command to find files by date; you can also specify a range of times.
For example, to find programs in the /usr/bin directory that you have not used in the last
one hundred days, you can try:

# find /usr/bin -type f -atime +100 -print

To find any files, either new or modified, that are one or fewer days old in the /usr/bin
directory, you can use

# find /usr/bin -type f -mtime -1 -print

whereis Command


The whereis command can quickly find files, and it also shows you where the file’s binary,
source, and manual pages reside. For example, the following command shows that the find
command is in the /usr/bin directory, and its man page is in the /usr/man/man1 directory:

# whereis find

find: /usr/bin/find /usr/man/man1/find.1

You can also use whereis to find only the binary version of the program with

# whereis -b find

find: /usr/bin/find

If whereis cannot find your request, you’ll get an empty return string, for example:

# whereis foo

foo:
Part of the problem could also be that the file is not in any of the directories the whereis
command searches. The directories whereis looks in are hard-coded into the program.
Although this may seem like a drawback, limiting searches to known directories such as /usr/
man, /usr/bin, or /usr/sbin can speed up the task of finding files.

locate Command

One way to speed up searches for files is not to search your file directories! You can do this
by using a program like locate, which uses a single database of filenames and locations, and
which saves time by looking in a single file instead of traversing your hard drive. Finding a
file using locate is much faster than the find command because locate will go directly to the
database file, find any matching filenames, and print its results.
Locate is easy to use. For example, to find all the PostScript files on your system, you can enter
# locate *.ps
Almost instantly, the files are printed to your screen.


whatis Command

The whatis command may be able to help you quickly find out what a program is, with a
summary line derived from the program’s manual page. For example, to find out what
whereis is (not whereis whatis!), you can enter

# whatis whereis

whereis (1)- locate the binary, source, and manual page files for a command


apropos Command

What if you want to do something and can’t remember which program does what? In this
case, you can turn to the apropos command. For example, if you can’t remember which
command searches for files, you can enter

# apropos search

apropos (1) - search the whatis database for strings
badblocks (8) - search a device for bad blocks
bsearch (3) - binary search of a sorted array
conflict (8) - search for alias/password conflicts
find (1) - search for files in a directory hierarchy


ls Command

The ls (list directory) command will quickly become one of your most often used programs.
In its simplest form, ls lists nearly all of the files in the current directory. But this command,
which has such a short name, probably has more command-line options (more than 75 at last
count) than any other program!

In the simple form, ls lists your files:

# ls

News
author.msg
auto
axhome
documents
mail
nsmail
reading
research
search
vultures.msg

You can also list the files as a single line, with comma separations, with the -m option:

# ls -m

News, author.msg, auto, axhome, documents, mail, nsmail, reading,
research, search, vultures.msg

If you don’t like this type of listing, you can have your files sorted horizontally, instead of
vertically (the default), with the -x option:

# ls -x

News
author.msg
auto
axhome

But are all these just files, or are there several directories? One way to find out is to use the
-F option:

# ls -F

News/
author.msg
auto/
axhome/
documents/
mail/
nsmail/
reading/
research/
search*
vultures.msg

As you can see, the -F option causes the ls command to show the directories, each with a /
character appended to the filename. The asterisk (*) shows that the file search is an executable
program. But are these all the files in this directory? If you want to see everything, you can
use the -a option with -F, as follows:

# ls -aF

./
../
.dt/
.dtprofile*
.neditdb
.netscape/

Long Directory Listing

Would you like even more information about your files? You can view the long format listing
by using the ls -l option, for example:

# ls -l

There are eight different columns. The first column is the file’s permissions flags, which
are covered in Hour 21, “Handling Files.” These flags generally show the file’s type, and
who can read, write (or modify or delete), or run the file. The next column shows the
number of links, which are discussed in Hour 5, “Manipulation and Searching Commands.”
Next is the owner name, followed by the group name; owners and groups are discussed in
Hour 21. The file size is listed next, followed by a timestamp of when the file or directory
was created or last modified. The last column, obviously, is each file’s name.

The ls command also supports using wildcards, or regular expressions, which means you can
use options similar to (and much more complex than) the examples you’ve seen with the find
and locate commands. For example, if you only want to search for text files in the current
directory, you can use

# ls *.txt

Finally, if you want to see all of the files on your system, you can use the ls -R option, which
recursively descends directories to show you the contents. Although you can use this
approach to search for files and build a catalog of the files on your system, you should be
warned that it might take several minutes to list your files. The listing may also include files
you don’t want listed, or files on other operating system filesystems, such as DOS or
Windows, especially if you use

# ls -R /

A better approach might be to use the -d option with -R to list only a certain number of
directory levels. For example, the following command will search three directory levels along
the root or / directory:

# ls -Rd /*/*/*

However, there’s a much better utility for getting a picture of the directory structure of your
system, the tree command, which is discussed later in this hour.


The dir command works like the default ls command, listing the files in sorted columns, for
example:

# dir

News
author.msg
auto
axhome
documents
mail
nsmail
reading
research
search
vultures.msg

The vdir command works like the ls -l option, and presents a long format listing by default,
for example:

# vdir
total 15


tree Command

You now know how to list the contents of your directories, but you may also be interested
in the directory structure of your system, or the directory structure of a particular tree of your
system (such as /usr/X11R6). For example, ls -R will recursively print out your directories,
but how are these directories related to each other? If you would like a more direct, graphical
view of your directories, a directory listing utility can help.


cat Command


The cat (concatenate file) command is used to send the contents of files to your screen. This
command may also be used to send files’ contents into other files. Hour 6 covers terms such
as standard input, standard output, and redirection, and this section shows you some basic
uses for this command.

Although cat may be useful for reading short files, it is usually used to combine, create,
overwrite, or append files. To use cat to look at a short file, you can enter

# cat test.txt

This text file was created by the cat command.
Cat could be the world’s simplest text editor.

# cat -n test.txt

1 This text file was created by the cat command.
2 Cat could be the world’s simplest text editor.

You can also use cat to look at several files at once, because cat accepts wildcards, for example:

# cat -n test*

1 This text file was created by the cat command.
2 Cat could be the world’s simplest text editor.

As you can see, cat has also included a second file in its output, and has numbered each line
of the output, not each file. Note that you could also see both files with

# cat test.txt test2.txt

The output will be exactly the same as if you’d used a wildcard. But looking at several files
is only one way to use cat. You can also use the cat command with the redirection operator
> to combine files. For example, if you would like to combine test.txt and test2.txt into
a third file called test3.txt, you can use

# cat test* > test3.txt

You can check the result with

# ls -l test*


Finally, here’s a trick you can use if you want to create a short text file without running a word
processor or text editor. Because the cat command can read the standard input (more about
this in Hour 6), you can make the cat command create a file and fill it with your keystrokes.
Here’s how:

# cat > myfile.txt

Now, enter some text:

# cat > myfile.txt

This is the cat word processor.
This is the end of the file.
Then, when you’re done typing, press Ctrl+D to close the file.


more Command

The more command is a traditional pager in the sense that it provides the basic features of
early pagers. You can use more on the command line with

# more longfile.txt

less Command

The less pager improves on more in several ways:

1. You can scroll backwards and forwards through text files using your cursor keys.

2. You can navigate through files with bookmarks, by line numbers, or by percentage
of file.

3. You can perform sophisticated searches, with pattern options and highlighting,
through multiple files.

rm Command

The rm command deletes files. This command has several simple options, but it should be used
cautiously. Why? Because when rm deletes a file, it is gone (though you may be able to recover
portions of text files).

Running Linux while logged in as the root operator and using the rm command carelessly has
caused many untold tales of woe and grief. Why? Because with one simple command you can
wipe out not only your Linux system, but also any mounted filesystems, including DOS
partitions, flash RAM cards, or removable hard drives, as follows:

# rm -fr /*

This command removes all files and directories recursively (with the -r option), starting at
the root or / directory. 

The rm command will delete one or several files from the command line. You can use any of
the following:

# rm file

# rm file1 file2 file3

# rm file*

One of the safer ways to use rm is through the -i or interactive option, where you’ll be asked
if you want to delete the file, for example:

# rm -i new*

rm: remove `newfile’? y

rm: remove `newfile2'? y

You can also force file deletion by using the -f option, as in

# rm -f new*

However, if rm finds a directory, even an empty one, it will not delete it, and it
complains, even if you use -f, as in the following:

# rm -f temp*

rm: temp: is a directory

rm: temp2: is a directory

When you combine -f and -r, the recursive option, you can delete directories and all files
or directories found within (if you own them; see Hour 21, “Handling Files”), as in the
following example:

# rm -fr temp*

The -fr option also makes rm act like the rmdir command.


mkdir Command


The mkdir command can create one or several directories with a single command line. You
may also be surprised to know that mkdir can also create a whole hierarchy of directories,
which includes parent and children, with a single command line.
This command is one of the basic tools (along with cp and mv) you’ll use to organize your
information. Now, take a look at some examples. The following simple command line creates
a single directory:

# mkdir temp

But you can also create multiple directories with

# mkdir temp2 temp3 temp4

You’d think that you could also type the following to make a directory named child under
temp:

# mkdir temp/child

And you can, because the temp directory exists (you just created it). But, suppose you type

# mkdir temp5/child

mkdir: cannot make directory `temp5/child’: No such file or directory

As you can see, mkdir complained because the temp5 directory did not exist.


rmdir Command


The rmdir command is used to remove directories. To remove a directory, all you have to do
is type

# rmdir tempdirectory

But there’s a catch: the directory must be empty first! If you try to delete a directory with any
files, you’ll get an error message like this:

# rmdir temp5

rmdir: temp5: Directory not empty

In this example, temp5 also contains other directories. The rmdir command would also
complain if a directory contains only files and not directories. You can use the rm command
to remove the files first (remember to be careful if you use the -fr option), or you can move
the files somewhere else, or rename the directory, with the mv command, discussed next.


 mv Command

The mv command, called a rename command but known to many as a move command, will
indeed rename files or directories, but it will also move them around your file system.
Actually, in the technical sense, the files or directories are not really moved. If you insist on
knowing all the gory details, read the Linux System Administrator’s Guide, available through
http://sunsite.unc.edu/LDP/LDP/sag/index.html

In its simplest form, mv can rename files, for example:

# touch file1

# mv file1 file2

This command renames file1 to file2. However, besides renaming files, mv can rename
directories, whether empty or not, for example:

# mkdir -p temp/temp2/temp3

# mv temp newtemp
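Besides renaming, mv will also move entries to a different place in the file system; for example, to move a file into a directory (the names here are illustrative):

```shell
mkdir archive
touch report.txt
mv report.txt archive/            # report.txt now lives inside archive/
```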


cp Command


The cp, or copy, command is used to copy files or directories. This command has nearly 40
command-line options. They won’t all be covered here, but you’ll learn about some of the
most commonly used options, which will save you time and trouble.
You’ll most likely first use cp in its simplest form, for example:

# cp file1 file2

This creates file2 and, unlike mv, leaves file1 in place. But you must be careful when using
cp, because you can copy a file onto a second file, effectively replacing it! In this regard, cp
can act just like mv.
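Two commonly used options help here: -i asks for confirmation before overwriting an existing file, and -r copies a whole directory tree. A quick sketch (the file and directory names are made up):

```shell
touch notes.txt
mkdir project
cp notes.txt notes.bak            # plain copy; notes.txt stays in place
cp -r project project.bak         # -r copies a directory and its contents
# cp -i notes.txt notes.bak       # -i would prompt before overwriting notes.bak
```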


Hard and Symbolic Links with the ln Command

Linux supports both hard and symbolic links. Although it is not important that you
understand how links work in Linux, you should understand the difference between these
two types of links and how to use links while you use Linux. To create hard or symbolic links,
you use the ln, or link, command.

The ln command creates both types of links. If you use the ln command to create a hard link,
you specify a second file on the command line you can use to reference the original file, for
example:

# cat > file1

This is file1

# ln file1 file2

If you delete file1, file2 will remain.
If you make changes to file1, such as adding text, these changes will appear in file2, and
if you make changes to file2, file1 will also be updated. You should also know that although
you can see two files, each 14 characters in size, only 14 characters of hard drive space are used
(OK, technically more than that, but it depends on the block size of the partition or type of
hard drive).
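You can check this for yourself: a hard link shares its inode with the original, and the data survives as long as one name remains. A small demonstration (run it in a scratch directory):

```shell
printf 'This is file1\n' > file1   # 14 characters, as in the example above
ln file1 file2
ls -i file1 file2                  # both names report the same inode number
rm file1
cat file2                          # prints: This is file1
```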

On the other hand, although a symbolic link can be useful, it also has a drawback. The next
examples show you why. First, to create a symbolic link, use the ln command's -s option:

# ln -s file1 file2

# ls -l file*

This tells you that file2 is a symbolic link to file1 (in the ls -l listing, file2 appears as
file2 -> file1). Also note that file2 is shorter than file1: a symbolic link stores only the
pathname of its target. Symbolic links are different from hard links
in that a symbolic link is merely a pathname, or alias, to the original file. Nothing happens
to the original file if you delete the symbolic link. However, if you delete the original file, your
symbolic link won’t help you at all:

# rm -f file1

# cat file2

cat: file2: No such file or directory

mc Command

The mc command, called Midnight Commander, is a visual, text-based interface for handling files. To start it, type the following at the command line:

# mc


grep Commands


This section introduces you to the family of grep commands. You’ll learn about grep, egrep,
and fgrep. In order to use these commands, you should know how to use some of the pattern-
matching techniques already discussed. You’ll use these commands to search through files
and extract text. Each of these programs works by searching each line in a file. You can search
single files or a range of files.


Regular expressions are patterns, using special syntax, that match strings (usually in text in
files, unless your search is for filenames). There are also extended regular expressions, but the
difference, important for syntax, should not deter you from the valuable lesson of being able
to construct patterns that will accurately match your desired search targets. This is important
if you’re looking for text in files, and critical if you’re performing potentially dangerous tasks,
such as multiple file deletions across your system.
You can build an infinite number of regular expressions using only a small subset of pattern
characters. Here’s a short list of some of these characters. You should be familiar with at least
one (the asterisk) from the previous examples:

*                   Matches any number of characters
? or .              Matches any single character
[xxx] or [x-x]      Matches a character in a range of characters
\x                  Matches a character such as ? or \
^pattern            Matches pattern to the beginning of a line
pattern$            Matches pattern to the end of a line


Each of the grep commands is basically the same and has nearly 20 different command-line
options. The only real difference is that egrep uses a slightly different syntax for its pattern
matching, whereas the fgrep command uses fixed strings. You'll see examples of each program,
using some of the common options. For specific details about the pattern-matching
capabilities of these programs, see the grep manual page.

To search a file for lines that begin with a number, for example, use the following syntax:

# grep ^[0-9] guide.txt

1 Introduction to Linux
2 Obtaining and Installing Linux
3 Linux Tutorial
4 System Administration
...
# egrep ^[0-9] guide.txt

1 Introduction to Linux
2 Obtaining and Installing Linux
3 Linux Tutorial
4 System Administration
...
# fgrep ^[0-9] guide.txt

You can see that grep and egrep returned a search (I've deleted all the output except the first
four lines). Notice, however, that fgrep cannot handle regular expressions. You must use
fixed patterns, or strings, with the fgrep command, for example:
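A fixed-string search that fgrep is happy with might look like this (guide.txt here is a small stand-in file created for the example, since the original file is not shown):

```shell
# create a small stand-in for guide.txt
printf '1 Introduction to Linux\n2 Obtaining and Installing Linux\n' > guide.txt
fgrep Linux guide.txt              # a literal string, not a regular expression
```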


Monday, 26 May 2014

ADVANTAGES & LIMITATIONS (friend or foe)

CHAPTER 5
ADVANTAGES & LIMITATIONS
5.1 ADVANTAGES
1. It is mainly intended for use by military and police forces.
2. It reduces miscommunication and aggressive battle tactics, representing an opportunity to minimize the human cost of war.
3. The system is designed to be usable by all active personnel in the field. Since our hardware is intended to be mounted on existing equipment, there is no special requirement for using it.
4. Our sound-based alert system might pose a challenge for the hearing-impaired; the buzzer may be replaced by a small vibrating motor to provide feedback instead.
5. The beam diameter is less than 1 cm, so protective eyewear is not needed for normal operation.

5.2 LIMITATIONS

1. The IFF system uses a 2.5 mW Class 3A laser. Lasers of this class can damage the retina if viewed directly for over two minutes.
2. Friendly-fire detection is done on a narrow electromagnetic spectrum. This means that multiple laser signals arriving at the same receiver will overlap and can generate an invalid registration, rendering the system inoperative.

3. Although one option is to use a different part of the electromagnetic spectrum for each transmitting laser module, we quickly determined that this was too costly and infeasible. One solution is to create a sparse signal, such that the laser is only activated once every few milliseconds. This ensures that even when many lasers are pointed at the receiver at once, each individual signal pattern will still be detected. Because the period between signal firings differs across transmit modules, each signal will eventually have a window of time in which it can register against the receiver. We can further improve collision management by adding an exponential back-off period upon collision.
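The exponential back-off idea amounts to doubling the wait after each collision. A minimal sketch (the loop count and the units are arbitrary; a real implementation would live in the MCU firmware):

```shell
backoff=1                          # initial back-off period (arbitrary units)
for attempt in 1 2 3 4 5; do
  echo "attempt $attempt: back off for $backoff units"
  backoff=$((backoff * 2))         # double the wait after every collision
done
```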
CONCLUSIONS:
A soldier equipped with this model can communicate properly within a preset range of the parallel TX/RX. Outside that range, communication is difficult and the information obtained is unreliable. The encrypted laser friend-or-foe identification technology can be battery powered. Communication between soldiers is reliable within the range of the parallel TX/RX or ZigBee module.
FUTURE SCOPE:

In some cases the parallel TX/RX can be replaced by a ZigBee module, so that soldiers can communicate over a larger distance.

SYSTEM ARCHITECTURE (friend or foe)

CHAPTER 3

3.1 SYSTEM ARCHITECTURE


Figure 3.1 shows the system architecture.
The grayed boxes in Figure 3.1 are parts of the system implemented in software on a microcontroller unit (MCU). The white boxes are implemented in hardware. A soldier equipped with the IFF system has a responder unit on their body armor and an initiator unit mounted on their rifle. The rifle module uses a laser to transmit an encoded message. If the rifle points toward a friendly soldier, phototransistors mounted on the target soldier's body armor will detect the incident laser beam. An MCU in the responder unit will decrypt the laser message using a pre-set private key. If decryption is successful, the MCU identifies which friendly soldier is currently aiming at the target soldier and broadcasts this information over RF. RF receivers in the initiator units of all friendly soldiers within firing range parse this information to determine whether the rifle they are mounted on is pointing toward a friendly soldier. If potential fratricide is detected, a buzzer mounted on the rifle goes off, signaling that the current rifle position might result in friendly fire. The feedback signal used to trigger the buzzer may also be used to prevent the next round from being loaded in the weapon's firing chamber.

3.1.1 Initiator unit:
Fig. 3.1.1 Block diagram of the initiator unit.
The initiator unit generates a laser signal along the weapon's line of fire. The microcontroller generates a PWM signal that operates a MOSFET switch, which in turn regulates the laser output. This creates a laser beam that appears steady to an observer but is actually a very rapid series of pulses with an interval unique to each soldier.
1.    The 230 V AC input voltage is stepped down to 12 V AC using a step-down transformer. The 12 V AC is rectified to 12 V DC using a bridge rectifier and then filtered.
2.    Voltage regulator LM7812 regulates the rectified supply, and this regulated voltage is given as input to the microcontroller, the laser, and the buzzer circuit. The oscillator works at 20 MHz and supplies clock pulses to the microcontroller.
3.    The code stored in the microcontroller is a decimal number, so before it is passed to the RF transmitter circuit it must be converted to binary format. This decimal-to-binary conversion is implemented in the program written in the micro C software. The encrypted message is then sent to the responder unit via an RF signal.
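The decimal-to-binary step can be sketched in a few lines of shell (37 is an arbitrary example code, not a value from the project):

```shell
# dec2bin: print the binary representation of a non-negative decimal number
dec2bin() {
  n=$1; bits=""
  while [ "$n" -gt 0 ]; do
    bits=$((n % 2))$bits           # prepend the lowest-order bit
    n=$((n / 2))
  done
  echo "${bits:-0}"
}
dec2bin 37                         # prints: 100101
```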

Fig. 3.1.2 shows the flow chart of the sender in the initiator part of the microcontroller.
3.1.2 Responder part:

Fig. 3.1.3 shows the responder unit.
1.    The responder unit is responsible for registering laser signals that land on a user. The phototransistor is struck by the beam from the laser module. It is biased in a common-collector configuration, with the collector connected to an op-amp that amplifies the signal. A potentiometer is used to set the correct biasing voltage for the op-amp.
2.    The supply to the responder unit is the same as in the initiator unit. It consists mainly of an RF receiver and a parallel TX circuit.
3.    The encrypted code received by the RF receiver circuit is decrypted using the preset private key of the microcontroller. If this code matches the code already stored in the microcontroller of the responder unit, a feedback signal is sent to the initiator unit, making RFr=1 (High), and the buzzer buzzes.
4.    Otherwise, if the received code does not match the stored code, no feedback signal is sent to the initiator unit, RFr=0 (Low), and the buzzer does not buzz.
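Steps 3 and 4 boil down to a compare-and-signal decision, sketched here in shell for illustration (the stored code value and the names are made up; the real logic runs in the responder's microcontroller):

```shell
STORED_CODE="100101"               # code preset in the responder MCU (assumed value)
check_code() {
  if [ "$1" = "$STORED_CODE" ]; then
    echo "RFr=1: buzzer buzzes"    # match: feedback signal to the initiator goes high
  else
    echo "RFr=0: buzzer silent"    # no match: no feedback signal
  fi
}
check_code 100101                  # prints: RFr=1: buzzer buzzes
check_code 111111                  # prints: RFr=0: buzzer silent
```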

Fig. 3.1.4 shows the flow chart of the receiver in the responder part of the microcontroller.








3.2 EXPERIMENTAL SET UP
3.2.1 INITIATOR MODEL

3.2.2 RESPONDER MODEL

CHAPTER 4
DISCUSSIONS OF RESULTS

The encrypted code sent from the initiator unit in the form of a laser signal is received by the phototransistor of the responder unit. In case (i), a valid code is sent and the buzzer buzzes (i.e. both microcontroller units hold the same code). In case (ii), an invalid code is sent from the initiator unit, so the buzzer does not buzz (i.e. the codes stored in the two units are different).