There are many ways of being a logical inferentialist. I've harped a number of times on proof-theoretic harmony and other proof-theoretic constraints that inferentialists favour. In short, these approaches, which mostly concern themselves with finding, in some sense, appropriate pairs of rule-sets (in natural deduction or sequent calculus), follow in the tradition of Nuel Belnap. His was probably the most influential reply to Prior's venomous tonk-paper. Yet there were several other replies, some of which receive little attention in the contemporary debate about proof-theoretic semantics.
People like J.T. Stevenson ("Roundabout the runabout inference-ticket") and Steven Wagner ("Tonk") had---perhaps like Prior himself---little sympathy for proof-theoretic semantics (in its early versions by Popper and Kneale). Both suggested that the real trouble with tonk is its blatant disregard for the underlying truth-conditional semantics (in fact, Stevenson even looks for a truth-functional semantics). These replies were mostly ignored in the harmony industry, for two different reasons, I suspect. First, a recourse to Tarski-style semantics was precisely what the proof-theoretic semanticists were trying to avoid. Second, even if you thought there was something to the idea that when a set of rules determines the meaning of a logical connective, it determines a truth-condition, the above papers offered little or no insight into how to formalise this connection. In fact, both appear to be mostly concerned with classical logic (and boolean values).
Yet their suggestion has some merits. First, inferentialists, although they provide an interesting idea of how concept-acquisition works for logical concepts, have had more to say about the justification of deduction than about the nature of semantic content---other than that it's not truth-conditional à la Tarski. Wagner suggests a worthwhile Fregean distinction (I here adopt some terminology from Hodes's "On the sense and reference of a logical constant"): although inference rules are constitutive of the sense of a logical connective, the reference is not to be identified with the set of inference rules. Rather, the reference is truth-conditional, and it is determined by the sense. There is hope here of having the best of both worlds: on the one hand, we get a recognised theory of semantic content; on the other, we can still adopt the inferentialist's thinking about entitlement to inference.
But despite any initial advantages, the problems described above remain. How precisely does the relation between inference rules and truth-conditions work? How do the inference rules carve out the semantic content? And, importantly, is this simply a way of spelling out a relationship between classical inferences and classical semantics?
In an attempt to answer these questions I've worked on some technical notions that can give us the appropriate go-between for the truth-conditions and the inference rules. Standardly, proof-theory and model-theory are held together by soundness and completeness, but these results tell us little in the way of how the rules of the proof-theory determine what the models look like. We can highlight this in a straightforward way. Assume that we have an antecedent grasp of some basic semantic concepts, in particular, the difference between designated and undesignated truth values. Assume, however, that we have no antecedent notion of the semantic conditions for the logical connectives in question (over and above the fact that they operate on said values). Consider then the example of classical logic, in a standard (single-conclusion) natural deduction system. Carnap showed that such systems are non-categorical in the sense that the inference rules are sound and complete with respect to two distinct classes of boolean valuations. (In fact, with respect to infinitely many; see Hardegree's "Completeness and Super-Valuations".) Of course, only one of these classes is the admissible class of classical valuations, in the sense that it can be induced from the atomic assignments via the truth-conditional clauses for the connectives. However, the point in the inferentialist context is precisely that these clauses cannot be assumed; they must be determined by the proof-theory. In our example, the non-categoricity amounts to a sort of non-uniqueness result for the proof-system at hand (with respect to boolean valuations).
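The flavour of the non-categoricity can be conveyed with a small sketch (a toy encoding of my own, not from any of the cited sources): the deviant valuation that assigns 1 to every formula respects every single-conclusion inference, since truth-preservation is then trivial, and yet it is not induced by the boolean clauses.

```python
# Toy formulas: ('atom', name), ('not', f), ('or', f, g) -- an
# illustrative encoding of my own devising.

# The deviant "all-true" valuation that Carnap-style arguments exploit:
all_true = lambda f: 1

# A single-conclusion argument (premises, conclusion) is respected
# by v iff it is not the case that every premise gets value 1 while
# the conclusion gets value 0.
def respects(v, premises, conclusion):
    return not (all(v(p) == 1 for p in premises) and v(conclusion) == 0)

p = ('atom', 'p')
not_p = ('not', p)

# Some classically valid single-conclusion arguments:
samples = [
    ([p], ('or', p, not_p)),   # disjunction introduction
    ([p, not_p], p),           # anything follows from p, not-p
]

# all_true respects them all -- indeed it respects *every*
# single-conclusion argument, valid or not, since no conclusion
# ever gets value 0 under it ...
assert all(respects(all_true, ps, c) for ps, c in samples)

# ... yet it is not a boolean valuation: it makes p and not-p
# both true, violating the truth-clause for negation.
assert all_true(p) == 1 and all_true(not_p) == 1
```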
It is natural, then, to think of categoricity as a constraint one would like to impose on a proof-system if its rules are to determine the truth-conditions. The idea is as follows: if the rules are sound and complete with respect to only one class of valuations, then we have learned something about the semantic content of the connectives; namely, that their truth-conditions must respect whatever conditions that class of valuations yields. Shoesmith and Smiley (Multiple-Conclusion Logic) showed that if we move to a multiple-conclusion system, we get precisely this: a classical multiple-conclusion system is sound and complete with respect to only the class of classically admissible valuations. We have, in other words, a way of carving up the truth tables for the classical connectives by looking exclusively at the inference rules. Of course, we do apply some background assumptions about what the matrices contain (e.g., the values), but we make no assumption about the semantic content of, say, negation and disjunction.
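A quick self-contained sketch (again my own toy illustration) of why multiple conclusions help: a sequent Γ ⊢ Δ is refuted by v when every premise gets 1 and every conclusion gets 0, so the classically valid sequents p, ¬p ⊢ (empty right-hand side) and ⊢ p, ¬p (empty left-hand side) eliminate the deviant "all-true" and "all-false" valuations that single-conclusion rules let through.

```python
# A multiple-conclusion argument (premises, conclusions) is refuted
# by v iff every premise gets value 1 and every conclusion gets
# value 0; it is respected otherwise. Empty sides are allowed.
def respects(v, premises, conclusions):
    return not (all(v(f) == 1 for f in premises)
                and all(v(f) == 0 for f in conclusions))

p = ('atom', 'p')
not_p = ('not', p)

all_true = lambda f: 1    # deviant valuations that survive the
all_false = lambda f: 0   # single-conclusion rules

# Classically valid sequents: p, not-p |-  (empty right side)
# and  |- p, not-p  (empty left side).
assert not respects(all_true, [p, not_p], [])   # all-true ruled out
assert not respects(all_false, [], [p, not_p])  # all-false ruled out
```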
What about non-boolean matrices? Without getting into details, it is crucial that the notion of categoricity at play in Shoesmith and Smiley's work is defined with respect to partitions of formulae into those that a valuation takes to 1 and those it takes to 0 (for the boolean values). When we go (finitely) many-valued, on the other hand, such partitions gloss over distinctions that don't cut across the designated/undesignated divide. Put differently, the definitions aren't fine-grained enough. We can learn something about the consequence relation from the inference rules, since consequence (at least normally) is only concerned with the preservation of designated values. But the truth-functions contain more detailed information, which cannot be recovered from proof-systems that are categorical in the above sense.
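The blind spot can be made concrete with a small sketch (my own toy setup: values {0, 1/2, 1} with only 1 designated): two valuations that differ only among undesignated values refute exactly the same arguments, so no class of arguments, however many conclusions we allow, can tell them apart.

```python
from itertools import combinations

F = ['a', 'b']               # a toy two-formula language
v1 = {'a': 1, 'b': 0}
v2 = {'a': 1, 'b': 0.5}      # differs from v1 only among
                             # undesignated values

def designated(x):
    return x == 1            # only the value 1 is designated

# A multiple-conclusion argument (G, D) is refuted by v iff every
# premise is designated and no conclusion is.
def refutes(v, G, D):
    return (all(designated(v[g]) for g in G)
            and not any(designated(v[d]) for d in D))

def powerset(xs):
    return [list(c) for r in range(len(xs) + 1)
                    for c in combinations(xs, r)]

# v1 and v2 refute exactly the same arguments, so they are
# indistinguishable by the consequence relation alone.
assert all(refutes(v1, G, D) == refutes(v2, G, D)
           for G in powerset(F) for D in powerset(F))
```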
What is called for is the more fine-grained notion of absoluteness. There is interesting work on this in Dunn and Hardegree's Algebraic Methods in Philosophical Logic. Again, without being too precise, absoluteness holds of a class of valuations just in case the logic induced from that class itself induces a class of valuations identical to the original. I.e., V = V(L(V)), where V and L are functions such that (a) L(V) is the class of arguments not refuted by any valuation v in V; and (b) V(L) is the class of valuations which refute no argument in L. (A logic is meant here in the minimal sense of a class of arguments; of course the shape of the arguments, e.g., single- vs multiple-conclusion (premise), will matter.)
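The two maps can be sketched computationally (my own toy implementation, with a two-formula language, boolean values, and arguments taken simply as pairs of sets, nothing substitution-closed), checking V = V(L(V)) for one class of valuations:

```python
from itertools import product, combinations

F = ['a', 'b']  # toy two-formula language, boolean values

def powerset(xs):
    return [frozenset(c) for r in range(len(xs) + 1)
                         for c in combinations(xs, r)]

# All boolean valuations on F, and all (premises, conclusions) pairs:
valuations = [dict(zip(F, bits)) for bits in product([0, 1], repeat=len(F))]
arguments = [(G, D) for G in powerset(F) for D in powerset(F)]

def refutes(v, arg):
    G, D = arg
    return all(v[g] == 1 for g in G) and all(v[d] == 0 for d in D)

def L(V):
    """Arguments not refuted by any valuation in V."""
    return {arg for arg in arguments if not any(refutes(v, arg) for v in V)}

def Vmap(logic):
    """Valuations refuting no argument in the logic."""
    return [v for v in valuations if not any(refutes(v, a) for a in logic)]

# Absoluteness of V0 here: V0 == V(L(V0)).
V0 = [v for v in valuations if v['a'] == 1]
closure = Vmap(L(V0))
same = lambda vs: {frozenset(v.items()) for v in vs}
assert same(closure) == same(V0)
```

With boolean values and multiple-conclusion arguments the closure comes back exactly, in line with the point that partitions suffice in the two-valued case.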
Absoluteness is a stronger notion than categoricity. In fact, whereas Shoesmith and Smiley prove that every finitely many-valued logic is categorical with respect to a multiple-conclusion system, the corresponding result does not hold for absoluteness: multiple conclusions, although sufficient in the boolean case, don't suffice in general. What is needed, it turns out, are generalisations using n-sided sequent systems, where n is the number of values in the logic in question. Such systems have been explored in much detail in Richard Zach's Proof Theory of Finite-Valued Logics.
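To give a flavour of the extra expressive power (my own toy reading, not Zach's formalism in detail): take a 3-sided sequent Γ0 | Γ½ | Γ1 to be satisfied by v just in case some formula in some Γi takes value i under v. Such sequents can separate valuations that agree on the designated/undesignated partition and are therefore indistinguishable by ordinary two-sided arguments.

```python
# Two 3-valued valuations that agree on which formulas are
# designated (value 1) but differ among the undesignated values:
v1 = {'a': 1, 'b': 0}
v2 = {'a': 1, 'b': 0.5}

# A 3-sided sequent assigns a (possibly empty) set of formulas to
# each value; v satisfies it iff some formula on the i-side takes
# value i under v.
def satisfies(v, sides):
    return any(v[f] == i for i, fs in sides.items() for f in fs)

seq = {0: ['b'], 0.5: [], 1: []}   # the sequent with b on the 0-side

# The 3-sided sequent distinguishes v1 from v2, which two-sided
# (designated vs. undesignated) arguments cannot do.
assert satisfies(v1, seq) and not satisfies(v2, seq)
```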