
1. Difference between stored procedure and function

2. Index types in SQL Server

3. How will you take a backup of your database?

4. How many types of memory are there in .NET?

5. Which controls did you use in your project?

6. Can you explain the ASP.NET page life cycle?

7. Can you explain the .NET architecture?

8. What is the difference between a primary key and a unique key with NOT NULL?

9. What is a session? Explain it with reference to a login form.

10. What is the 3-tier architecture of a .NET application?

Operating Systems

Non-contiguous memory allocation does not require that the size of a file be determined at the start; the file grows as needed over time. A major advantage is reduced waste of disk space and greater flexibility in memory allocation, since the operating system allocates memory to the file only when it is needed.
Non-contiguous memory allocation offers the following advantages over contiguous memory allocation:
  • It allows code and data to be shared among processes.
  • External fragmentation is non-existent.
  • Virtual memory is strongly supported.
Non-contiguous memory allocation methods include paging and segmentation.

Paging

Paging is a non-contiguous memory allocation method in which physical memory is divided into fixed-size blocks called frames, whose size is a power of 2 (typically between 512 and 8,192 bytes). Logical memory is divided into blocks of the same size, called pages. To execute a program of size n pages, n free frames are needed to load it. (A small address-translation sketch follows the list below.)
Some of the advantages and disadvantages of paging, as noted by Dhotre, include the following:
  • Advantages:
  • Paging eliminates external fragmentation.
  • Multiprogramming is supported.
  • The overhead of compaction during relocation is eliminated.
  • Disadvantages:
  • Paging increases the cost of computer hardware, since page-address mapping requires hardware support.
  • Memory must be set aside to store structures such as page tables.
  • Some memory space remains unused when the available blocks are not sufficient for the address spaces of the jobs to be run.
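
To make the frame/page arithmetic above concrete, here is a minimal sketch (in Python, with an assumed 4,096-byte page size and a hypothetical page table) of how a logical address splits into a page number and an offset; the numbers are illustrative only.

    # Minimal sketch: splitting a logical address into (page, offset) and
    # mapping it through a page table. Page sizes are powers of two precisely
    # so this can be done with shifts and masks.
    PAGE_SIZE = 4096                              # assumed page size, 2**12 bytes
    OFFSET_BITS = PAGE_SIZE.bit_length() - 1      # 12

    def translate(logical_address, page_table):
        """Map a logical address to a physical address via a page table."""
        page_number = logical_address >> OFFSET_BITS
        offset = logical_address & (PAGE_SIZE - 1)
        frame_number = page_table[page_number]    # hypothetical page-table lookup
        return (frame_number << OFFSET_BITS) | offset

    # Example: page 0 lives in frame 5, page 1 in frame 2.
    page_table = {0: 5, 1: 2}
    print(hex(translate(0x0ABC, page_table)))     # 0x5abc: frame 5, offset 0xABC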

Segmentation

Segmentation is a non-contiguous memory allocation technique that supports the user's view of memory. A program is seen as a collection of segments such as the main program, procedures, functions, methods, the stack, objects, and so on. (A short translation sketch follows the list below.)
Some of the advantages and disadvantages of segmentation, as noted by Godse et al., include the following:
  • Advantages:
  • Internal fragmentation is eliminated in segmentation memory allocation.
  • Segmentation fully supports virtual memory.
  • Dynamic growth of segments is fully supported.
  • Segmentation supports dynamic linking.
  • Segmentation allows the user to view memory in a logical sense.

  • Disadvantages:
  • The size of a segment is ultimately limited by the size of main memory.
  • It is difficult to manage segments on secondary storage.
  • Segmentation is slower than paging.
  • Segmentation suffers from external fragmentation, even though it eliminates internal fragmentation.
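
A similarly minimal sketch of segment-based translation follows; the segment table of (base, limit) pairs below is hypothetical, and the bounds check mirrors what the hardware would do before adding the offset to the base.

    # Minimal sketch: segmented address translation with a (base, limit) table.
    # Segment numbers, bases, and limits are purely illustrative.
    segment_table = {
        0: (1400, 1000),   # segment 0: main program
        1: (6300,  400),   # segment 1: stack
        2: (4300, 1100),   # segment 2: data
    }

    def translate(segment, offset):
        """Return the physical address for (segment, offset), or raise on a bad offset."""
        base, limit = segment_table[segment]
        if offset >= limit:
            raise MemoryError(f"offset {offset} exceeds limit {limit} of segment {segment}")
        return base + offset

    print(translate(2, 53))    # 4353
    # translate(0, 1222) would raise: 1222 exceeds segment 0's limit of 1000.
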
Problem Detail: 

A typical HDD represents information as either 1 (e.g., spin up) or 0 (e.g., spin down). Let's assume you want to represent the information physically in a hex system with 16 states, and assume this is possible using some physical property (perhaps the same spin).

What is the minimum physical size of a memory element in this new system, in units of binary bits? It seems to me that the minimum is 8 bits = 1 byte. Therefore, going from a binary representation to a higher-radix representation will, all else being equal, make the minimum variable size 1 byte instead of 1 bit. Is this logic correct?

Asked By : student1
Answered By : Yuval Filmus

One hexadecimal digit contains 4 binary digits. You can compute this as follows: $\log_2 16 = 4$. Alternatively, $2^4 = 16$. So the minimal memory element will contain 4 bits' worth of information.

This also works when the number of states is not a power of 2, but you have to be more flexible in your interpretation.
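
A short sketch of the same calculation, generalized to an arbitrary number of states: a symbol with $b$ states needs $\lceil \log_2 b \rceil$ physical bits, which is exactly 4 when $b = 16$ (for non-powers of 2 the information content $\log_2 b$ is fractional, hence the need for a looser interpretation).

    import math

    def bits_per_symbol(num_states):
        """Minimum number of whole bits needed to distinguish num_states states."""
        return math.ceil(math.log2(num_states))

    print(bits_per_symbol(16))   # 4 -- one hex digit carries 4 bits of information
    print(bits_per_symbol(10))   # 4 -- a decimal digit needs 4 physical bits, with some waste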

Question Source : http://cs.stackexchange.com/questions/52233

Problem Detail: 

To reduce noise as much as possible, I'm planning to capture multiple scenes with an RGB-D camera and then try to merge them.

Are there any research papers, thoughts, ideas, algorithms, or anything else that could help?

Asked By : Mohammad Eliass Alhusain
Answered By : D.W.

Yes, one technique is known as super-resolution imaging. There's a rich literature on the subject, at least for RGB images. You could check Google Scholar to see if there has been any research on super-resolution for RGB-D images (e.g., from 3D cameras such as Kinect, Intel RealSense, etc.).
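
As a hedged starting point (not a substitute for the super-resolution literature mentioned above), the simplest multi-frame idea is to average several depth frames that have already been registered to a common viewpoint, which reduces independent per-pixel noise roughly in proportion to the number of frames. The sketch below assumes NumPy arrays and treats zero depth as missing data; both are my own assumptions about the input format.

    import numpy as np

    def merge_depth_frames(frames):
        """Average registered depth frames, ignoring missing (zero) pixels."""
        stack = np.stack([np.asarray(f, dtype=float) for f in frames])
        valid = stack > 0                          # many depth cameras report 0 for "no data"
        counts = np.maximum(valid.sum(axis=0), 1)  # avoid dividing by zero where no frame has data
        return np.where(valid, stack, 0.0).sum(axis=0) / counts

    # merged = merge_depth_frames([frame1, frame2, frame3])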

Question Source : http://cs.stackexchange.com/questions/56093

Problem Detail: 

There are some questions arising from the proof of Lemma 5.8.1 in Cover's book on information theory that confuse me.

The first question is why he assumes that we can "consider an optimal code $C_m$". Is he assuming that we are encoding a finite number of words, so that $\sum p_i l_i$ must attain a minimum value? Here is the relevant snapshot.

[snapshot of the proof of Lemma 5.8.1 omitted]

Second, there is an observation made in these notes that was also made in my class before proving the theorem on the optimality of Huffman codes, namely:

observe that earlier results allow us to restrict our attention to instantaneously decodeable codes

I don't really understand why this observation is necessary.

Asked By : Rodrigo
Answered By : Yuval Filmus

To answer your first question, the index $i$ goes over the range $1,\ldots,m$. The assumption is that there are finitely many symbols. While some theoretical papers consider encodings of countably infinite domains (such as universal codes), usually the set of symbols is assumed to be finite.

To answer your second question, the claim is that a Huffman code has minimum redundancy among the class of uniquely decodable codes. The proof of Theorem 10 in your notes, however, only directly proves that a Huffman code has minimum redundancy among the class of instantaneously decodable codes. It does so when it takes an optimal encoding for $p_1,\ldots,p_{n-2},p_{n-1}+p_n$ and produces an optimal encoding for $p_1,\ldots,p_n$ by adding a disambiguating bit to the codeword corresponding to $p_{n-1}+p_n$; it's not clear how to carry out a similar construction for an arbitrary uniquely decodable code.
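
To see the construction the proof relies on, here is a minimal Huffman-coding sketch (in Python, written for illustration only): at each step the two least-probable subtrees are merged, and the two merged symbols end up with codewords that differ only in a final disambiguating bit, which is exactly the step the proof extends.

    import heapq
    from itertools import count

    def huffman_code(probs):
        """Build a Huffman code for {symbol: probability}; return {symbol: bitstring}."""
        tiebreak = count()   # keeps heap entries comparable when probabilities tie
        heap = [(p, next(tiebreak), sym) for sym, p in probs.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, left = heapq.heappop(heap)    # two least-probable subtrees
            p2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (p1 + p2, next(tiebreak), (left, right)))
        _, _, tree = heap[0]

        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):          # internal node: one extra bit per branch
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:
                codes[node] = prefix or "0"      # degenerate single-symbol case
        walk(tree, "")
        return codes

    print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}))
    # {'a': '0', 'b': '10', 'd': '110', 'c': '111'} -- 'd' and 'c' differ only in the last bit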

Question Source : http://cs.stackexchange.com/questions/64727

Problem Detail: 

Assume that we have $l \leq \frac{u}{v}$ and assume that $u=O(x^2)$ and $v=\Omega(x)$. Can we say that $l=O(x)$?

Thank you.

Asked By : user7060
Answered By : Yuval Filmus

Since $u = O(x^2)$, there exist $N_1,C_1>0$ such that $u \leq C_1x^2$ for all $x \geq N_1$. Since $v = \Omega(x)$, there exist $N_2,C_2>0$ such that $v \geq C_2x$ for all $x \geq N_2$. Therefore for all $x \geq \max(N_1,N_2)$ we have $$ l \leq \frac{u}{v} \leq \frac{C_1x^2}{C_2x} = \frac{C_1}{C_2} x. $$ So $l = O(x)$.

Question Source : http://cs.stackexchange.com/questions/45258

Problem Detail: 

While doing some digging around in the GNU implementation of the C++ standard library, I came across a section in bits/hashtable.h that refers to a hash function "in the terminology of Tavori and Dreizin" (see below). I have tried without success to find information on these people, in the hope of learning about their hash function; everything points to online versions of the file that the following extract is from. Can anyone give me some information on this?

    *  @tparam _H1  The hash function. A unary function object with
    *  argument type _Key and result type size_t. Return values should
    *  be distributed over the entire range [0, numeric_limits<size_t>::max()].
    *
    *  @tparam _H2  The range-hashing function (in the terminology of
    *  Tavori and Dreizin). A binary function object whose argument
    *  types and result type are all size_t. Given arguments r and N,
    *  the return value is in the range [0, N).
    *
    *  @tparam _Hash  The ranged hash function (Tavori and Dreizin). A
    *  binary function whose argument types are _Key and size_t and
    *  whose result type is size_t. Given arguments k and N, the
    *  return value is in the range [0, N). Default: hash(k, N) =
    *  h2(h1(k), N). If _Hash is anything other than the default, _H1
    *  and _H2 are ignored.
Asked By : moarCoffee
Answered By : D.W.

I read that passage as saying that Tavori and Dreizin introduced the terminology/concept of a "range-hashing function". Presumably, that's a name they use for a hash function with some special properties. In other words, I read it as implying not that Tavori and Dreizin introduced a specific hash function, but that they talked about a category of hash functions and gave it a name.

I don't know if that is what the authors actually meant; that's just how I would interpret it.

I tried searching on Google Scholar for these names and found nothing that seemed relevant. A quick search turns up a reference to Ami Tavori at IBM (a past student of Prof. Meir Feder, working in computer science), but I don't know if that's who this is referring to.
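
Whatever the origin of the terminology, the composition described in the quoted comment, hash(k, N) = h2(h1(k), N), is easy to illustrate. In the sketch below (Python chosen purely for brevity), a stand-in for _H1 maps the key over the full unsigned range and a modulo reduction plays the role of the range-hashing function _H2.

    def h1(key):
        """Stand-in for _H1: map a key to a value over the full unsigned 64-bit range."""
        return hash(key) & 0xFFFFFFFFFFFFFFFF

    def h2(r, n):
        """Stand-in for _H2 (range-hashing): fold a full-range value into [0, n)."""
        return r % n

    def ranged_hash(key, n):
        """Default ranged hash from the quoted comment: hash(k, N) = h2(h1(k), N)."""
        return h2(h1(key), n)

    print(ranged_hash("example-key", 11))   # a bucket index in [0, 11)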

Question Source : http://cs.stackexchange.com/questions/66931

Problem Detail: 

In unification there is an "occur check". For example, $X = a\,X$ fails to find a substitution for $X$, since $X$ appears on the right-hand side too. First-order unification and higher-order unification both have an occur check.

The paper on nominal unification describes a kind of unification based on nominal concepts, but it does not mention an "occur check" at all.

So I am wondering why. Does it have an occur check?

Asked By : alim
Answered By : alim

Yes, it has the occur check. The ~variable transformation rule of nominal unification has a condition which states:

   provided X does not occur in t 

What it is saying is exactly the occur check.
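
For readers unfamiliar with the check itself, here is a minimal first-order sketch of what a condition like "provided X does not occur in t" verifies before binding X to t. Terms are modeled here as nested tuples and variables as strings, an encoding chosen purely for illustration.

    def occurs(var, term):
        """Return True if the variable `var` occurs anywhere inside `term`."""
        if term == var:
            return True
        if isinstance(term, tuple):               # compound term: (functor, arg1, arg2, ...)
            return any(occurs(var, arg) for arg in term[1:])
        return False

    # X = a X fails the occur check, so no substitution is produced:
    print(occurs("X", ("a", "X")))    # True  -> unification must fail
    print(occurs("X", ("a", "Y")))    # False -> binding X to the term is allowed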

Question Source : http://cs.stackexchange.com/questions/65833

Problem Detail: 

In Sipser's text, he writes:

When a probabilistic Turing machine recognizes a language, it must accept all strings in the language and reject all strings not in the language as usual, except that now we allow the machine a small probability of error.

Why is he using "recognizes" instead of "decides"? If the machine rejects all strings that are not in the language, then it always halts, so aren't we restricted to deciders in this case?

The definition goes on:

For $0 < \epsilon < 1/2$ we say that $M$ recognizes language $A$ with error probability $\epsilon$ if

1) $w \in A$ implies $P(M \text{ accepts } w) \ge 1 - \epsilon$, and

2) $w \notin A$ implies $P(M \text{ rejects } w) \ge 1 - \epsilon$.

So it seems like the case of $M$ looping is simply not allowed for probabilistic Turing machines?

Asked By : theQman
Answered By : Yuval Filmus

Complexity theory makes no distinction between "deciding" and "recognizing". The two words are used interchangeably. Turing machines considered in complexity theory are usually assumed to always halt. Indeed, usually only time-bounded machines are considered (such as polytime Turing machines), and these halt by definition.

In your particular case, you can interpret accept as halting in an accepting state, and reject as halting in a rejecting state. The Turing machine is thus allowed not to halt. However, the class BPP also requires the machine to run in polynomial time, that is, to halt in polynomial time. In particular, the machine must always halt.

Question Source : http://cs.stackexchange.com/questions/65491

Problem Detail: 

In Johnson's 1975 paper 'Finding All the Elementary Circuits of a Directed Graph', the pseudocode refers to two separate data structures, logical array blocked and list array B. What is the difference between them, and what do they represent? Moreover, what does 'Vk' mean?

Asked By : Danish Amjad Alvi
Answered By : D.W.

In the pseudocode, T array means an array where each element has type T. Logical is the type of a boolean (i.e., it can hold the value true or false). Integer list is the type of a list of integers.

Thus, in the pseudocode, logical array blocked(n) is the declaration of an array called blocked containing n elements, where each element is a boolean. integer list array B(n) is the declaration of an array called B containing n elements, where each element is a list of integers.

$V_K$ isn't clearly defined, but from context, I'd guess it is the set of vertices in $A_K$.
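
In a language with explicit data structures rather than Fortran-style declarations, the two arrays might look like the sketch below (n being the number of vertices); this is only a rendering of the declarations described above, not Johnson's algorithm itself.

    n = 6                          # number of vertices (illustrative)

    blocked = [False] * n          # "logical array blocked(n)": one boolean per vertex
    B = [[] for _ in range(n)]     # "integer list array B(n)": one integer list per vertex

    blocked[2] = True              # set the boolean for vertex 2
    B[2].append(4)                 # append an integer to the list associated with vertex 2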

Question Source : http://cs.stackexchange.com/questions/58180